00:00:00.000 Started by upstream project "autotest-nightly" build number 4250 00:00:00.000 originally caused by: 00:00:00.000 Started by upstream project "nightly-trigger" build number 3613 00:00:00.000 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.079 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.080 The recommended git tool is: git 00:00:00.080 using credential 00000000-0000-0000-0000-000000000002 00:00:00.082 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.133 Fetching changes from the remote Git repository 00:00:00.135 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.209 Using shallow fetch with depth 1 00:00:00.209 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.209 > git --version # timeout=10 00:00:00.275 > git --version # 'git version 2.39.2' 00:00:00.275 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.331 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.331 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:08.722 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:08.733 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:08.746 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:08.746 > git config core.sparsecheckout # timeout=10 00:00:08.757 > git read-tree -mu HEAD # timeout=10 00:00:08.774 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:08.792 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:08.792 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:08.880 [Pipeline] Start of Pipeline 00:00:08.894 [Pipeline] library 00:00:08.896 Loading library shm_lib@master 00:00:08.897 Library shm_lib@master is cached. Copying from home. 00:00:08.910 [Pipeline] node 00:00:23.912 Still waiting to schedule task 00:00:23.913 Waiting for next available executor on ‘vagrant-vm-host’ 00:00:42.000 Running on VM-host-WFP1 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:42.001 [Pipeline] { 00:00:42.013 [Pipeline] catchError 00:00:42.014 [Pipeline] { 00:00:42.028 [Pipeline] wrap 00:00:42.039 [Pipeline] { 00:00:42.047 [Pipeline] stage 00:00:42.049 [Pipeline] { (Prologue) 00:00:42.069 [Pipeline] echo 00:00:42.070 Node: VM-host-WFP1 00:00:42.077 [Pipeline] cleanWs 00:00:42.086 [WS-CLEANUP] Deleting project workspace... 00:00:42.086 [WS-CLEANUP] Deferred wipeout is used... 
00:00:42.094 [WS-CLEANUP] done 00:00:42.332 [Pipeline] setCustomBuildProperty 00:00:42.408 [Pipeline] httpRequest 00:00:43.029 [Pipeline] echo 00:00:43.031 Sorcerer 10.211.164.101 is alive 00:00:43.042 [Pipeline] retry 00:00:43.044 [Pipeline] { 00:00:43.059 [Pipeline] httpRequest 00:00:43.064 HttpMethod: GET 00:00:43.064 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:43.065 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:43.075 Response Code: HTTP/1.1 200 OK 00:00:43.075 Success: Status code 200 is in the accepted range: 200,404 00:00:43.076 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:48.787 [Pipeline] } 00:00:48.805 [Pipeline] // retry 00:00:48.812 [Pipeline] sh 00:00:49.094 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:49.109 [Pipeline] httpRequest 00:00:49.550 [Pipeline] echo 00:00:49.552 Sorcerer 10.211.164.101 is alive 00:00:49.563 [Pipeline] retry 00:00:49.565 [Pipeline] { 00:00:49.579 [Pipeline] httpRequest 00:00:49.584 HttpMethod: GET 00:00:49.584 URL: http://10.211.164.101/packages/spdk_d1c46ed8e5f61500a9ef69d922f8d3f89a4e9cb3.tar.gz 00:00:49.585 Sending request to url: http://10.211.164.101/packages/spdk_d1c46ed8e5f61500a9ef69d922f8d3f89a4e9cb3.tar.gz 00:00:49.592 Response Code: HTTP/1.1 200 OK 00:00:49.592 Success: Status code 200 is in the accepted range: 200,404 00:00:49.593 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_d1c46ed8e5f61500a9ef69d922f8d3f89a4e9cb3.tar.gz 00:01:55.460 [Pipeline] } 00:01:55.478 [Pipeline] // retry 00:01:55.486 [Pipeline] sh 00:01:55.768 + tar --no-same-owner -xf spdk_d1c46ed8e5f61500a9ef69d922f8d3f89a4e9cb3.tar.gz 00:01:59.063 [Pipeline] sh 00:01:59.344 + git -C spdk log --oneline -n5 00:01:59.344 d1c46ed8e lib/rdma_provider: Add API to check if accel seq supported 00:01:59.344 a59d7e018 lib/mlx5: Add API to check if UMR registration supported 00:01:59.344 f6925f5e4 accel/mlx5: Merge crypto+copy to reg UMR 00:01:59.344 008a6371b accel/mlx5: Initial implementation of mlx5 platform driver 00:01:59.344 cc533a3e5 nvme/nvme: Factor out submit_request function 00:01:59.362 [Pipeline] writeFile 00:01:59.377 [Pipeline] sh 00:01:59.659 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:59.670 [Pipeline] sh 00:01:59.951 + cat autorun-spdk.conf 00:01:59.951 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:59.951 SPDK_TEST_NVMF=1 00:01:59.951 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:59.951 SPDK_TEST_URING=1 00:01:59.951 SPDK_TEST_VFIOUSER=1 00:01:59.951 SPDK_TEST_USDT=1 00:01:59.951 SPDK_RUN_ASAN=1 00:01:59.951 SPDK_RUN_UBSAN=1 00:01:59.951 NET_TYPE=virt 00:01:59.951 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:59.958 RUN_NIGHTLY=1 00:01:59.960 [Pipeline] } 00:01:59.974 [Pipeline] // stage 00:01:59.988 [Pipeline] stage 00:01:59.991 [Pipeline] { (Run VM) 00:02:00.005 [Pipeline] sh 00:02:00.286 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:02:00.286 + echo 'Start stage prepare_nvme.sh' 00:02:00.286 Start stage prepare_nvme.sh 00:02:00.286 + [[ -n 7 ]] 00:02:00.286 + disk_prefix=ex7 00:02:00.286 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:02:00.286 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:02:00.286 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:02:00.286 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:00.286 ++ 
SPDK_TEST_NVMF=1 00:02:00.286 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:00.286 ++ SPDK_TEST_URING=1 00:02:00.286 ++ SPDK_TEST_VFIOUSER=1 00:02:00.286 ++ SPDK_TEST_USDT=1 00:02:00.286 ++ SPDK_RUN_ASAN=1 00:02:00.286 ++ SPDK_RUN_UBSAN=1 00:02:00.286 ++ NET_TYPE=virt 00:02:00.286 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:00.286 ++ RUN_NIGHTLY=1 00:02:00.286 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:02:00.286 + nvme_files=() 00:02:00.286 + declare -A nvme_files 00:02:00.286 + backend_dir=/var/lib/libvirt/images/backends 00:02:00.286 + nvme_files['nvme.img']=5G 00:02:00.286 + nvme_files['nvme-cmb.img']=5G 00:02:00.286 + nvme_files['nvme-multi0.img']=4G 00:02:00.286 + nvme_files['nvme-multi1.img']=4G 00:02:00.287 + nvme_files['nvme-multi2.img']=4G 00:02:00.287 + nvme_files['nvme-openstack.img']=8G 00:02:00.287 + nvme_files['nvme-zns.img']=5G 00:02:00.287 + (( SPDK_TEST_NVME_PMR == 1 )) 00:02:00.287 + (( SPDK_TEST_FTL == 1 )) 00:02:00.287 + (( SPDK_TEST_NVME_FDP == 1 )) 00:02:00.287 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:02:00.287 + for nvme in "${!nvme_files[@]}" 00:02:00.287 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G 00:02:00.287 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:02:00.287 + for nvme in "${!nvme_files[@]}" 00:02:00.287 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G 00:02:00.287 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:02:00.287 + for nvme in "${!nvme_files[@]}" 00:02:00.287 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G 00:02:00.287 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:02:00.287 + for nvme in "${!nvme_files[@]}" 00:02:00.287 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G 00:02:00.287 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:02:00.287 + for nvme in "${!nvme_files[@]}" 00:02:00.287 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G 00:02:00.287 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:02:00.287 + for nvme in "${!nvme_files[@]}" 00:02:00.287 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G 00:02:00.565 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:02:00.565 + for nvme in "${!nvme_files[@]}" 00:02:00.565 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G 00:02:00.565 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:02:00.565 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu 00:02:00.565 + echo 'End stage prepare_nvme.sh' 00:02:00.565 End stage prepare_nvme.sh 00:02:00.589 [Pipeline] sh 00:02:00.871 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:02:00.871 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 
--nic-model=e1000 -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -H -a -v -f fedora39 00:02:00.871 00:02:00.871 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:02:00.871 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:02:00.871 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:02:00.871 HELP=0 00:02:00.871 DRY_RUN=0 00:02:00.871 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img, 00:02:00.871 NVME_DISKS_TYPE=nvme,nvme, 00:02:00.871 NVME_AUTO_CREATE=0 00:02:00.871 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img, 00:02:00.871 NVME_CMB=,, 00:02:00.871 NVME_PMR=,, 00:02:00.871 NVME_ZNS=,, 00:02:00.871 NVME_MS=,, 00:02:00.871 NVME_FDP=,, 00:02:00.871 SPDK_VAGRANT_DISTRO=fedora39 00:02:00.871 SPDK_VAGRANT_VMCPU=10 00:02:00.871 SPDK_VAGRANT_VMRAM=12288 00:02:00.871 SPDK_VAGRANT_PROVIDER=libvirt 00:02:00.871 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:02:00.871 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:02:00.871 SPDK_OPENSTACK_NETWORK=0 00:02:00.871 VAGRANT_PACKAGE_BOX=0 00:02:00.871 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:02:00.871 FORCE_DISTRO=true 00:02:00.871 VAGRANT_BOX_VERSION= 00:02:00.871 EXTRA_VAGRANTFILES= 00:02:00.871 NIC_MODEL=e1000 00:02:00.871 00:02:00.871 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:02:00.871 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:02:03.402 Bringing machine 'default' up with 'libvirt' provider... 00:02:04.776 ==> default: Creating image (snapshot of base box volume). 00:02:04.776 ==> default: Creating domain with the following settings... 
00:02:04.776 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1730901991_568d9835fb8b11d16000 00:02:04.776 ==> default: -- Domain type: kvm 00:02:04.776 ==> default: -- Cpus: 10 00:02:04.776 ==> default: -- Feature: acpi 00:02:04.776 ==> default: -- Feature: apic 00:02:04.776 ==> default: -- Feature: pae 00:02:04.776 ==> default: -- Memory: 12288M 00:02:04.776 ==> default: -- Memory Backing: hugepages: 00:02:04.776 ==> default: -- Management MAC: 00:02:04.776 ==> default: -- Loader: 00:02:04.776 ==> default: -- Nvram: 00:02:04.776 ==> default: -- Base box: spdk/fedora39 00:02:04.776 ==> default: -- Storage pool: default 00:02:04.776 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1730901991_568d9835fb8b11d16000.img (20G) 00:02:04.776 ==> default: -- Volume Cache: default 00:02:04.776 ==> default: -- Kernel: 00:02:04.776 ==> default: -- Initrd: 00:02:04.776 ==> default: -- Graphics Type: vnc 00:02:04.776 ==> default: -- Graphics Port: -1 00:02:04.776 ==> default: -- Graphics IP: 127.0.0.1 00:02:04.776 ==> default: -- Graphics Password: Not defined 00:02:04.776 ==> default: -- Video Type: cirrus 00:02:04.776 ==> default: -- Video VRAM: 9216 00:02:04.776 ==> default: -- Sound Type: 00:02:04.776 ==> default: -- Keymap: en-us 00:02:04.776 ==> default: -- TPM Path: 00:02:04.776 ==> default: -- INPUT: type=mouse, bus=ps2 00:02:04.776 ==> default: -- Command line args: 00:02:04.776 ==> default: -> value=-device, 00:02:04.776 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:02:04.776 ==> default: -> value=-drive, 00:02:04.776 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0, 00:02:04.776 ==> default: -> value=-device, 00:02:04.776 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:04.776 ==> default: -> value=-device, 00:02:04.776 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:02:04.776 ==> default: -> value=-drive, 00:02:04.777 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:02:04.777 ==> default: -> value=-device, 00:02:04.777 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:04.777 ==> default: -> value=-drive, 00:02:04.777 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:02:04.777 ==> default: -> value=-device, 00:02:04.777 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:04.777 ==> default: -> value=-drive, 00:02:04.777 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:02:04.777 ==> default: -> value=-device, 00:02:04.777 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:05.343 ==> default: Creating shared folders metadata... 00:02:05.343 ==> default: Starting domain. 00:02:07.249 ==> default: Waiting for domain to get an IP address... 00:02:22.145 ==> default: Waiting for SSH to become available... 00:02:24.046 ==> default: Configuring and enabling network interfaces... 
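The -device/-drive value pairs above define one single-namespace NVMe controller (serial 12340, backed by ex7-nvme.img) and one controller with three namespaces (serial 12341, backed by the ex7-nvme-multi*.img files). On a plain qemu-system-x86_64 command line, roughly the same wiring would look like the sketch below; treat it as an illustration only, since the vagrant-libvirt domain above also carries machine, graphics and boot-disk settings that are not repeated here.

# Rough standalone equivalent of the NVMe wiring above (sketch only).
# Machine type, accelerator and the omitted Fedora boot disk are assumptions;
# CPU count, memory and the NVMe/drive parameters are taken from the log.
qemu-system-x86_64 \
  -machine q35,accel=kvm -smp 10 -m 12288 \
  -device nvme,id=nvme-0,serial=12340,addr=0x10 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0 \
  -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,logical_block_size=4096,physical_block_size=4096 \
  -device nvme,id=nvme-1,serial=12341,addr=0x11 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-1-drive0 \
  -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,logical_block_size=4096,physical_block_size=4096 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-1-drive1 \
  -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,logical_block_size=4096,physical_block_size=4096 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-1-drive2 \
  -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,logical_block_size=4096,physical_block_size=4096

Each nvme-ns device attaches a raw backing drive to its parent controller's bus under a distinct nsid, which is why the second controller later shows up in the guest as nvme1n1, nvme1n2 and nvme1n3 in the setup.sh status output further down in this log.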
00:02:29.309 default: SSH address: 192.168.121.127:22 00:02:29.309 default: SSH username: vagrant 00:02:29.309 default: SSH auth method: private key 00:02:32.631 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:42.649 ==> default: Mounting SSHFS shared folder... 00:02:44.548 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:44.548 ==> default: Checking Mount.. 00:02:46.450 ==> default: Folder Successfully Mounted! 00:02:46.450 ==> default: Running provisioner: file... 00:02:47.383 default: ~/.gitconfig => .gitconfig 00:02:47.950 00:02:47.950 SUCCESS! 00:02:47.950 00:02:47.950 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:47.950 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:47.950 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:47.950 00:02:47.958 [Pipeline] } 00:02:47.973 [Pipeline] // stage 00:02:47.981 [Pipeline] dir 00:02:47.982 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:02:47.983 [Pipeline] { 00:02:47.994 [Pipeline] catchError 00:02:47.996 [Pipeline] { 00:02:48.007 [Pipeline] sh 00:02:48.285 + vagrant ssh-config --host vagrant 00:02:48.285 + sed -ne /^Host/,$p 00:02:48.285 + tee ssh_conf 00:02:51.566 Host vagrant 00:02:51.566 HostName 192.168.121.127 00:02:51.566 User vagrant 00:02:51.566 Port 22 00:02:51.566 UserKnownHostsFile /dev/null 00:02:51.566 StrictHostKeyChecking no 00:02:51.566 PasswordAuthentication no 00:02:51.566 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:51.566 IdentitiesOnly yes 00:02:51.566 LogLevel FATAL 00:02:51.566 ForwardAgent yes 00:02:51.566 ForwardX11 yes 00:02:51.566 00:02:51.579 [Pipeline] withEnv 00:02:51.580 [Pipeline] { 00:02:51.593 [Pipeline] sh 00:02:51.873 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:51.873 source /etc/os-release 00:02:51.873 [[ -e /image.version ]] && img=$(< /image.version) 00:02:51.873 # Minimal, systemd-like check. 00:02:51.873 if [[ -e /.dockerenv ]]; then 00:02:51.873 # Clear garbage from the node's name: 00:02:51.873 # agt-er_autotest_547-896 -> autotest_547-896 00:02:51.873 # $HOSTNAME is the actual container id 00:02:51.873 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:51.873 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:51.873 # We can assume this is a mount from a host where container is running, 00:02:51.873 # so fetch its hostname to easily identify the target swarm worker. 
00:02:51.873 container="$(< /etc/hostname) ($agent)" 00:02:51.873 else 00:02:51.873 # Fallback 00:02:51.873 container=$agent 00:02:51.873 fi 00:02:51.873 fi 00:02:51.873 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:51.873 00:02:52.142 [Pipeline] } 00:02:52.184 [Pipeline] // withEnv 00:02:52.219 [Pipeline] setCustomBuildProperty 00:02:52.229 [Pipeline] stage 00:02:52.230 [Pipeline] { (Tests) 00:02:52.240 [Pipeline] sh 00:02:52.530 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:52.800 [Pipeline] sh 00:02:53.076 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:53.343 [Pipeline] timeout 00:02:53.343 Timeout set to expire in 1 hr 0 min 00:02:53.344 [Pipeline] { 00:02:53.353 [Pipeline] sh 00:02:53.628 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:54.193 HEAD is now at d1c46ed8e lib/rdma_provider: Add API to check if accel seq supported 00:02:54.203 [Pipeline] sh 00:02:54.480 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:54.751 [Pipeline] sh 00:02:55.031 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:55.304 [Pipeline] sh 00:02:55.583 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:02:55.846 ++ readlink -f spdk_repo 00:02:55.846 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:55.846 + [[ -n /home/vagrant/spdk_repo ]] 00:02:55.846 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:55.846 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:55.846 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:55.846 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:55.846 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:55.846 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:02:55.846 + cd /home/vagrant/spdk_repo 00:02:55.846 + source /etc/os-release 00:02:55.846 ++ NAME='Fedora Linux' 00:02:55.846 ++ VERSION='39 (Cloud Edition)' 00:02:55.846 ++ ID=fedora 00:02:55.846 ++ VERSION_ID=39 00:02:55.846 ++ VERSION_CODENAME= 00:02:55.846 ++ PLATFORM_ID=platform:f39 00:02:55.846 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:55.846 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:55.846 ++ LOGO=fedora-logo-icon 00:02:55.846 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:55.846 ++ HOME_URL=https://fedoraproject.org/ 00:02:55.846 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:55.846 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:55.847 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:55.847 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:55.847 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:55.847 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:55.847 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:55.847 ++ SUPPORT_END=2024-11-12 00:02:55.847 ++ VARIANT='Cloud Edition' 00:02:55.847 ++ VARIANT_ID=cloud 00:02:55.847 + uname -a 00:02:55.847 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:55.847 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:56.416 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:56.416 Hugepages 00:02:56.416 node hugesize free / total 00:02:56.416 node0 1048576kB 0 / 0 00:02:56.416 node0 2048kB 0 / 0 00:02:56.416 00:02:56.416 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:56.416 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:56.416 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:56.416 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:56.416 + rm -f /tmp/spdk-ld-path 00:02:56.416 + source autorun-spdk.conf 00:02:56.416 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:56.416 ++ SPDK_TEST_NVMF=1 00:02:56.416 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:56.416 ++ SPDK_TEST_URING=1 00:02:56.416 ++ SPDK_TEST_VFIOUSER=1 00:02:56.416 ++ SPDK_TEST_USDT=1 00:02:56.416 ++ SPDK_RUN_ASAN=1 00:02:56.416 ++ SPDK_RUN_UBSAN=1 00:02:56.416 ++ NET_TYPE=virt 00:02:56.416 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:56.416 ++ RUN_NIGHTLY=1 00:02:56.416 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:56.416 + [[ -n '' ]] 00:02:56.416 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:56.674 + for M in /var/spdk/build-*-manifest.txt 00:02:56.674 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:56.674 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:56.674 + for M in /var/spdk/build-*-manifest.txt 00:02:56.674 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:56.674 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:56.674 + for M in /var/spdk/build-*-manifest.txt 00:02:56.674 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:56.674 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:56.674 ++ uname 00:02:56.674 + [[ Linux == \L\i\n\u\x ]] 00:02:56.674 + sudo dmesg -T 00:02:56.674 + sudo dmesg --clear 00:02:56.674 + dmesg_pid=5209 00:02:56.674 + [[ Fedora Linux == FreeBSD ]] 00:02:56.674 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 
00:02:56.674 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:56.674 + sudo dmesg -Tw 00:02:56.675 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:56.675 + [[ -x /usr/src/fio-static/fio ]] 00:02:56.675 + export FIO_BIN=/usr/src/fio-static/fio 00:02:56.675 + FIO_BIN=/usr/src/fio-static/fio 00:02:56.675 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:56.675 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:56.675 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:56.675 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:56.675 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:56.675 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:56.675 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:56.675 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:56.675 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:56.940 14:07:24 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:02:56.940 14:07:24 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:56.940 14:07:24 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:56.940 14:07:24 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:02:56.940 14:07:24 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:56.940 14:07:24 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_URING=1 00:02:56.940 14:07:24 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_VFIOUSER=1 00:02:56.940 14:07:24 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_TEST_USDT=1 00:02:56.940 14:07:24 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_RUN_ASAN=1 00:02:56.940 14:07:24 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_RUN_UBSAN=1 00:02:56.940 14:07:24 -- spdk_repo/autorun-spdk.conf@9 -- $ NET_TYPE=virt 00:02:56.940 14:07:24 -- spdk_repo/autorun-spdk.conf@10 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:56.940 14:07:24 -- spdk_repo/autorun-spdk.conf@11 -- $ RUN_NIGHTLY=1 00:02:56.940 14:07:24 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:56.940 14:07:24 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:56.940 14:07:24 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:02:56.940 14:07:24 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:56.940 14:07:24 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:56.940 14:07:24 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:56.940 14:07:24 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:56.940 14:07:24 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:56.940 14:07:24 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:56.940 14:07:24 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:56.940 14:07:24 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:56.940 14:07:24 -- paths/export.sh@5 -- $ export PATH 00:02:56.940 14:07:24 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:56.940 14:07:24 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:56.940 14:07:24 -- common/autobuild_common.sh@486 -- $ date +%s 00:02:56.940 14:07:24 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730902044.XXXXXX 00:02:56.940 14:07:24 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730902044.ykm6BF 00:02:56.940 14:07:24 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:02:56.940 14:07:24 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:02:56.940 14:07:24 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:56.941 14:07:24 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:56.941 14:07:24 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:56.941 14:07:24 -- common/autobuild_common.sh@502 -- $ get_config_params 00:02:56.941 14:07:24 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:56.941 14:07:24 -- common/autotest_common.sh@10 -- $ set +x 00:02:56.941 14:07:24 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-uring' 00:02:56.941 14:07:24 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:02:56.941 14:07:24 -- pm/common@17 -- $ local monitor 00:02:56.941 14:07:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:56.941 14:07:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:56.941 14:07:24 -- pm/common@25 -- $ sleep 1 00:02:56.941 14:07:24 -- pm/common@21 -- $ date +%s 00:02:56.941 14:07:24 -- pm/common@21 -- $ date +%s 00:02:56.941 14:07:24 -- 
pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730902044 00:02:56.941 14:07:24 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730902044 00:02:56.941 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730902044_collect-vmstat.pm.log 00:02:56.941 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730902044_collect-cpu-load.pm.log 00:02:57.878 14:07:25 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:02:57.878 14:07:25 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:57.878 14:07:25 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:57.878 14:07:25 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:57.878 14:07:25 -- spdk/autobuild.sh@16 -- $ date -u 00:02:57.878 Wed Nov 6 02:07:25 PM UTC 2024 00:02:57.878 14:07:25 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:58.138 v25.01-pre-170-gd1c46ed8e 00:02:58.138 14:07:25 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:58.138 14:07:25 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:58.138 14:07:25 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:02:58.138 14:07:25 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:02:58.138 14:07:25 -- common/autotest_common.sh@10 -- $ set +x 00:02:58.138 ************************************ 00:02:58.138 START TEST asan 00:02:58.138 ************************************ 00:02:58.138 using asan 00:02:58.138 14:07:25 asan -- common/autotest_common.sh@1127 -- $ echo 'using asan' 00:02:58.138 00:02:58.138 real 0m0.001s 00:02:58.138 user 0m0.000s 00:02:58.138 sys 0m0.000s 00:02:58.138 14:07:25 asan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:02:58.138 14:07:25 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:58.138 ************************************ 00:02:58.138 END TEST asan 00:02:58.138 ************************************ 00:02:58.138 14:07:25 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:58.138 14:07:25 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:58.138 14:07:25 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:02:58.138 14:07:25 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:02:58.138 14:07:25 -- common/autotest_common.sh@10 -- $ set +x 00:02:58.138 ************************************ 00:02:58.138 START TEST ubsan 00:02:58.138 ************************************ 00:02:58.138 using ubsan 00:02:58.138 14:07:25 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan' 00:02:58.138 00:02:58.138 real 0m0.000s 00:02:58.138 user 0m0.000s 00:02:58.138 sys 0m0.000s 00:02:58.138 14:07:25 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:02:58.138 14:07:25 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:58.138 ************************************ 00:02:58.138 END TEST ubsan 00:02:58.138 ************************************ 00:02:58.138 14:07:25 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:58.138 14:07:25 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:58.138 14:07:25 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:58.138 14:07:25 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:58.138 14:07:25 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:58.138 14:07:25 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:58.138 14:07:25 -- spdk/autobuild.sh@59 -- $ 
[[ 0 -eq 1 ]] 00:02:58.138 14:07:25 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:58.139 14:07:25 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-uring --with-shared 00:02:58.421 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:58.421 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:58.989 Using 'verbs' RDMA provider 00:03:18.037 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:32.925 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:32.925 Creating mk/config.mk...done. 00:03:32.925 Creating mk/cc.flags.mk...done. 00:03:32.925 Type 'make' to build. 00:03:32.925 14:07:59 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:32.925 14:07:59 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:03:32.925 14:07:59 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:03:32.925 14:07:59 -- common/autotest_common.sh@10 -- $ set +x 00:03:32.925 ************************************ 00:03:32.925 START TEST make 00:03:32.925 ************************************ 00:03:32.925 14:07:59 make -- common/autotest_common.sh@1127 -- $ make -j10 00:03:32.925 make[1]: Nothing to be done for 'all'. 00:03:34.298 The Meson build system 00:03:34.298 Version: 1.5.0 00:03:34.298 Source dir: /home/vagrant/spdk_repo/spdk/libvfio-user 00:03:34.298 Build dir: /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:03:34.298 Build type: native build 00:03:34.298 Project name: libvfio-user 00:03:34.298 Project version: 0.0.1 00:03:34.298 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:34.298 C linker for the host machine: cc ld.bfd 2.40-14 00:03:34.298 Host machine cpu family: x86_64 00:03:34.298 Host machine cpu: x86_64 00:03:34.298 Run-time dependency threads found: YES 00:03:34.298 Library dl found: YES 00:03:34.298 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:34.298 Run-time dependency json-c found: YES 0.17 00:03:34.298 Run-time dependency cmocka found: YES 1.1.7 00:03:34.298 Program pytest-3 found: NO 00:03:34.298 Program flake8 found: NO 00:03:34.298 Program misspell-fixer found: NO 00:03:34.298 Program restructuredtext-lint found: NO 00:03:34.298 Program valgrind found: YES (/usr/bin/valgrind) 00:03:34.298 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:34.298 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:34.298 Compiler for C supports arguments -Wwrite-strings: YES 00:03:34.298 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:34.298 Program test-lspci.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-lspci.sh) 00:03:34.298 Program test-linkage.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-linkage.sh) 00:03:34.298 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:03:34.298 Build targets in project: 8 00:03:34.298 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:34.298 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:34.298 00:03:34.298 libvfio-user 0.0.1 00:03:34.298 00:03:34.298 User defined options 00:03:34.298 buildtype : debug 00:03:34.298 default_library: shared 00:03:34.298 libdir : /usr/local/lib 00:03:34.298 00:03:34.298 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:34.555 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:03:34.813 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:34.813 [2/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:34.813 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:34.813 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:34.813 [5/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:34.813 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:34.813 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:34.813 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:34.813 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:34.813 [10/37] Compiling C object samples/client.p/client.c.o 00:03:34.813 [11/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:34.813 [12/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:34.813 [13/37] Compiling C object samples/server.p/server.c.o 00:03:34.813 [14/37] Compiling C object samples/null.p/null.c.o 00:03:34.813 [15/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:34.813 [16/37] Linking target samples/client 00:03:35.071 [17/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:35.071 [18/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:35.071 [19/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:35.071 [20/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:35.071 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:35.071 [22/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:35.071 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:35.071 [24/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:35.071 [25/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:35.071 [26/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:35.071 [27/37] Linking target lib/libvfio-user.so.0.0.1 00:03:35.071 [28/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:35.071 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:35.071 [30/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:35.071 [31/37] Linking target test/unit_tests 00:03:35.330 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:35.330 [33/37] Linking target samples/null 00:03:35.330 [34/37] Linking target samples/server 00:03:35.330 [35/37] Linking target samples/lspci 00:03:35.330 [36/37] Linking target samples/shadow_ioeventfd_server 00:03:35.330 [37/37] Linking target samples/gpio-pci-idio-16 00:03:35.330 INFO: autodetecting backend as ninja 00:03:35.330 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:03:35.330 
DESTDIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user meson install --quiet -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:03:35.896 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:03:35.896 ninja: no work to do. 00:03:45.880 The Meson build system 00:03:45.880 Version: 1.5.0 00:03:45.880 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:03:45.880 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:03:45.880 Build type: native build 00:03:45.880 Program cat found: YES (/usr/bin/cat) 00:03:45.880 Project name: DPDK 00:03:45.880 Project version: 24.03.0 00:03:45.880 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:45.880 C linker for the host machine: cc ld.bfd 2.40-14 00:03:45.880 Host machine cpu family: x86_64 00:03:45.880 Host machine cpu: x86_64 00:03:45.880 Message: ## Building in Developer Mode ## 00:03:45.880 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:45.880 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:03:45.880 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:45.880 Program python3 found: YES (/usr/bin/python3) 00:03:45.880 Program cat found: YES (/usr/bin/cat) 00:03:45.880 Compiler for C supports arguments -march=native: YES 00:03:45.880 Checking for size of "void *" : 8 00:03:45.880 Checking for size of "void *" : 8 (cached) 00:03:45.880 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:03:45.880 Library m found: YES 00:03:45.880 Library numa found: YES 00:03:45.880 Has header "numaif.h" : YES 00:03:45.880 Library fdt found: NO 00:03:45.880 Library execinfo found: NO 00:03:45.880 Has header "execinfo.h" : YES 00:03:45.880 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:45.880 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:45.880 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:45.880 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:45.880 Run-time dependency openssl found: YES 3.1.1 00:03:45.880 Run-time dependency libpcap found: YES 1.10.4 00:03:45.880 Has header "pcap.h" with dependency libpcap: YES 00:03:45.880 Compiler for C supports arguments -Wcast-qual: YES 00:03:45.880 Compiler for C supports arguments -Wdeprecated: YES 00:03:45.880 Compiler for C supports arguments -Wformat: YES 00:03:45.880 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:45.880 Compiler for C supports arguments -Wformat-security: NO 00:03:45.880 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:45.880 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:45.880 Compiler for C supports arguments -Wnested-externs: YES 00:03:45.880 Compiler for C supports arguments -Wold-style-definition: YES 00:03:45.880 Compiler for C supports arguments -Wpointer-arith: YES 00:03:45.880 Compiler for C supports arguments -Wsign-compare: YES 00:03:45.880 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:45.880 Compiler for C supports arguments -Wundef: YES 00:03:45.880 Compiler for C supports arguments -Wwrite-strings: YES 00:03:45.880 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:45.880 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:45.880 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:45.880 Compiler for C supports arguments 
-Wno-zero-length-bounds: YES 00:03:45.880 Program objdump found: YES (/usr/bin/objdump) 00:03:45.880 Compiler for C supports arguments -mavx512f: YES 00:03:45.880 Checking if "AVX512 checking" compiles: YES 00:03:45.880 Fetching value of define "__SSE4_2__" : 1 00:03:45.880 Fetching value of define "__AES__" : 1 00:03:45.880 Fetching value of define "__AVX__" : 1 00:03:45.880 Fetching value of define "__AVX2__" : 1 00:03:45.880 Fetching value of define "__AVX512BW__" : 1 00:03:45.880 Fetching value of define "__AVX512CD__" : 1 00:03:45.880 Fetching value of define "__AVX512DQ__" : 1 00:03:45.880 Fetching value of define "__AVX512F__" : 1 00:03:45.880 Fetching value of define "__AVX512VL__" : 1 00:03:45.880 Fetching value of define "__PCLMUL__" : 1 00:03:45.880 Fetching value of define "__RDRND__" : 1 00:03:45.880 Fetching value of define "__RDSEED__" : 1 00:03:45.880 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:45.880 Fetching value of define "__znver1__" : (undefined) 00:03:45.880 Fetching value of define "__znver2__" : (undefined) 00:03:45.880 Fetching value of define "__znver3__" : (undefined) 00:03:45.880 Fetching value of define "__znver4__" : (undefined) 00:03:45.881 Library asan found: YES 00:03:45.881 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:45.881 Message: lib/log: Defining dependency "log" 00:03:45.881 Message: lib/kvargs: Defining dependency "kvargs" 00:03:45.881 Message: lib/telemetry: Defining dependency "telemetry" 00:03:45.881 Library rt found: YES 00:03:45.881 Checking for function "getentropy" : NO 00:03:45.881 Message: lib/eal: Defining dependency "eal" 00:03:45.881 Message: lib/ring: Defining dependency "ring" 00:03:45.881 Message: lib/rcu: Defining dependency "rcu" 00:03:45.881 Message: lib/mempool: Defining dependency "mempool" 00:03:45.881 Message: lib/mbuf: Defining dependency "mbuf" 00:03:45.881 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:45.881 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:45.881 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:45.881 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:45.881 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:45.881 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:03:45.881 Compiler for C supports arguments -mpclmul: YES 00:03:45.881 Compiler for C supports arguments -maes: YES 00:03:45.881 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:45.881 Compiler for C supports arguments -mavx512bw: YES 00:03:45.881 Compiler for C supports arguments -mavx512dq: YES 00:03:45.881 Compiler for C supports arguments -mavx512vl: YES 00:03:45.881 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:45.881 Compiler for C supports arguments -mavx2: YES 00:03:45.881 Compiler for C supports arguments -mavx: YES 00:03:45.881 Message: lib/net: Defining dependency "net" 00:03:45.881 Message: lib/meter: Defining dependency "meter" 00:03:45.881 Message: lib/ethdev: Defining dependency "ethdev" 00:03:45.881 Message: lib/pci: Defining dependency "pci" 00:03:45.881 Message: lib/cmdline: Defining dependency "cmdline" 00:03:45.881 Message: lib/hash: Defining dependency "hash" 00:03:45.881 Message: lib/timer: Defining dependency "timer" 00:03:45.881 Message: lib/compressdev: Defining dependency "compressdev" 00:03:45.881 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:45.881 Message: lib/dmadev: Defining dependency "dmadev" 00:03:45.881 Compiler for C supports arguments -Wno-cast-qual: YES 
00:03:45.881 Message: lib/power: Defining dependency "power" 00:03:45.881 Message: lib/reorder: Defining dependency "reorder" 00:03:45.881 Message: lib/security: Defining dependency "security" 00:03:45.881 Has header "linux/userfaultfd.h" : YES 00:03:45.881 Has header "linux/vduse.h" : YES 00:03:45.881 Message: lib/vhost: Defining dependency "vhost" 00:03:45.881 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:45.881 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:45.881 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:45.881 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:45.881 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:45.881 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:45.881 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:45.881 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:45.881 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:45.881 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:45.881 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:45.881 Configuring doxy-api-html.conf using configuration 00:03:45.881 Configuring doxy-api-man.conf using configuration 00:03:45.881 Program mandb found: YES (/usr/bin/mandb) 00:03:45.881 Program sphinx-build found: NO 00:03:45.881 Configuring rte_build_config.h using configuration 00:03:45.881 Message: 00:03:45.881 ================= 00:03:45.881 Applications Enabled 00:03:45.881 ================= 00:03:45.881 00:03:45.881 apps: 00:03:45.881 00:03:45.881 00:03:45.881 Message: 00:03:45.881 ================= 00:03:45.881 Libraries Enabled 00:03:45.881 ================= 00:03:45.881 00:03:45.881 libs: 00:03:45.881 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:45.881 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:45.881 cryptodev, dmadev, power, reorder, security, vhost, 00:03:45.881 00:03:45.881 Message: 00:03:45.881 =============== 00:03:45.881 Drivers Enabled 00:03:45.881 =============== 00:03:45.881 00:03:45.881 common: 00:03:45.881 00:03:45.881 bus: 00:03:45.881 pci, vdev, 00:03:45.881 mempool: 00:03:45.881 ring, 00:03:45.881 dma: 00:03:45.881 00:03:45.881 net: 00:03:45.881 00:03:45.881 crypto: 00:03:45.881 00:03:45.881 compress: 00:03:45.881 00:03:45.881 vdpa: 00:03:45.881 00:03:45.881 00:03:45.881 Message: 00:03:45.881 ================= 00:03:45.881 Content Skipped 00:03:45.881 ================= 00:03:45.881 00:03:45.881 apps: 00:03:45.881 dumpcap: explicitly disabled via build config 00:03:45.881 graph: explicitly disabled via build config 00:03:45.881 pdump: explicitly disabled via build config 00:03:45.881 proc-info: explicitly disabled via build config 00:03:45.881 test-acl: explicitly disabled via build config 00:03:45.881 test-bbdev: explicitly disabled via build config 00:03:45.881 test-cmdline: explicitly disabled via build config 00:03:45.881 test-compress-perf: explicitly disabled via build config 00:03:45.881 test-crypto-perf: explicitly disabled via build config 00:03:45.881 test-dma-perf: explicitly disabled via build config 00:03:45.881 test-eventdev: explicitly disabled via build config 00:03:45.881 test-fib: explicitly disabled via build config 00:03:45.881 test-flow-perf: explicitly disabled via build config 00:03:45.881 test-gpudev: explicitly disabled via build config 00:03:45.881 test-mldev: explicitly disabled via 
build config 00:03:45.881 test-pipeline: explicitly disabled via build config 00:03:45.881 test-pmd: explicitly disabled via build config 00:03:45.881 test-regex: explicitly disabled via build config 00:03:45.881 test-sad: explicitly disabled via build config 00:03:45.881 test-security-perf: explicitly disabled via build config 00:03:45.881 00:03:45.881 libs: 00:03:45.881 argparse: explicitly disabled via build config 00:03:45.881 metrics: explicitly disabled via build config 00:03:45.881 acl: explicitly disabled via build config 00:03:45.881 bbdev: explicitly disabled via build config 00:03:45.881 bitratestats: explicitly disabled via build config 00:03:45.881 bpf: explicitly disabled via build config 00:03:45.881 cfgfile: explicitly disabled via build config 00:03:45.881 distributor: explicitly disabled via build config 00:03:45.881 efd: explicitly disabled via build config 00:03:45.881 eventdev: explicitly disabled via build config 00:03:45.881 dispatcher: explicitly disabled via build config 00:03:45.881 gpudev: explicitly disabled via build config 00:03:45.881 gro: explicitly disabled via build config 00:03:45.881 gso: explicitly disabled via build config 00:03:45.881 ip_frag: explicitly disabled via build config 00:03:45.881 jobstats: explicitly disabled via build config 00:03:45.881 latencystats: explicitly disabled via build config 00:03:45.881 lpm: explicitly disabled via build config 00:03:45.881 member: explicitly disabled via build config 00:03:45.881 pcapng: explicitly disabled via build config 00:03:45.881 rawdev: explicitly disabled via build config 00:03:45.881 regexdev: explicitly disabled via build config 00:03:45.881 mldev: explicitly disabled via build config 00:03:45.881 rib: explicitly disabled via build config 00:03:45.881 sched: explicitly disabled via build config 00:03:45.881 stack: explicitly disabled via build config 00:03:45.881 ipsec: explicitly disabled via build config 00:03:45.881 pdcp: explicitly disabled via build config 00:03:45.881 fib: explicitly disabled via build config 00:03:45.881 port: explicitly disabled via build config 00:03:45.881 pdump: explicitly disabled via build config 00:03:45.881 table: explicitly disabled via build config 00:03:45.881 pipeline: explicitly disabled via build config 00:03:45.881 graph: explicitly disabled via build config 00:03:45.881 node: explicitly disabled via build config 00:03:45.881 00:03:45.881 drivers: 00:03:45.881 common/cpt: not in enabled drivers build config 00:03:45.881 common/dpaax: not in enabled drivers build config 00:03:45.881 common/iavf: not in enabled drivers build config 00:03:45.881 common/idpf: not in enabled drivers build config 00:03:45.881 common/ionic: not in enabled drivers build config 00:03:45.881 common/mvep: not in enabled drivers build config 00:03:45.881 common/octeontx: not in enabled drivers build config 00:03:45.881 bus/auxiliary: not in enabled drivers build config 00:03:45.881 bus/cdx: not in enabled drivers build config 00:03:45.881 bus/dpaa: not in enabled drivers build config 00:03:45.881 bus/fslmc: not in enabled drivers build config 00:03:45.881 bus/ifpga: not in enabled drivers build config 00:03:45.881 bus/platform: not in enabled drivers build config 00:03:45.881 bus/uacce: not in enabled drivers build config 00:03:45.881 bus/vmbus: not in enabled drivers build config 00:03:45.881 common/cnxk: not in enabled drivers build config 00:03:45.881 common/mlx5: not in enabled drivers build config 00:03:45.881 common/nfp: not in enabled drivers build config 00:03:45.881 
common/nitrox: not in enabled drivers build config 00:03:45.881 common/qat: not in enabled drivers build config 00:03:45.881 common/sfc_efx: not in enabled drivers build config 00:03:45.881 mempool/bucket: not in enabled drivers build config 00:03:45.881 mempool/cnxk: not in enabled drivers build config 00:03:45.881 mempool/dpaa: not in enabled drivers build config 00:03:45.881 mempool/dpaa2: not in enabled drivers build config 00:03:45.881 mempool/octeontx: not in enabled drivers build config 00:03:45.881 mempool/stack: not in enabled drivers build config 00:03:45.881 dma/cnxk: not in enabled drivers build config 00:03:45.881 dma/dpaa: not in enabled drivers build config 00:03:45.881 dma/dpaa2: not in enabled drivers build config 00:03:45.881 dma/hisilicon: not in enabled drivers build config 00:03:45.881 dma/idxd: not in enabled drivers build config 00:03:45.881 dma/ioat: not in enabled drivers build config 00:03:45.881 dma/skeleton: not in enabled drivers build config 00:03:45.881 net/af_packet: not in enabled drivers build config 00:03:45.881 net/af_xdp: not in enabled drivers build config 00:03:45.881 net/ark: not in enabled drivers build config 00:03:45.881 net/atlantic: not in enabled drivers build config 00:03:45.881 net/avp: not in enabled drivers build config 00:03:45.881 net/axgbe: not in enabled drivers build config 00:03:45.881 net/bnx2x: not in enabled drivers build config 00:03:45.882 net/bnxt: not in enabled drivers build config 00:03:45.882 net/bonding: not in enabled drivers build config 00:03:45.882 net/cnxk: not in enabled drivers build config 00:03:45.882 net/cpfl: not in enabled drivers build config 00:03:45.882 net/cxgbe: not in enabled drivers build config 00:03:45.882 net/dpaa: not in enabled drivers build config 00:03:45.882 net/dpaa2: not in enabled drivers build config 00:03:45.882 net/e1000: not in enabled drivers build config 00:03:45.882 net/ena: not in enabled drivers build config 00:03:45.882 net/enetc: not in enabled drivers build config 00:03:45.882 net/enetfec: not in enabled drivers build config 00:03:45.882 net/enic: not in enabled drivers build config 00:03:45.882 net/failsafe: not in enabled drivers build config 00:03:45.882 net/fm10k: not in enabled drivers build config 00:03:45.882 net/gve: not in enabled drivers build config 00:03:45.882 net/hinic: not in enabled drivers build config 00:03:45.882 net/hns3: not in enabled drivers build config 00:03:45.882 net/i40e: not in enabled drivers build config 00:03:45.882 net/iavf: not in enabled drivers build config 00:03:45.882 net/ice: not in enabled drivers build config 00:03:45.882 net/idpf: not in enabled drivers build config 00:03:45.882 net/igc: not in enabled drivers build config 00:03:45.882 net/ionic: not in enabled drivers build config 00:03:45.882 net/ipn3ke: not in enabled drivers build config 00:03:45.882 net/ixgbe: not in enabled drivers build config 00:03:45.882 net/mana: not in enabled drivers build config 00:03:45.882 net/memif: not in enabled drivers build config 00:03:45.882 net/mlx4: not in enabled drivers build config 00:03:45.882 net/mlx5: not in enabled drivers build config 00:03:45.882 net/mvneta: not in enabled drivers build config 00:03:45.882 net/mvpp2: not in enabled drivers build config 00:03:45.882 net/netvsc: not in enabled drivers build config 00:03:45.882 net/nfb: not in enabled drivers build config 00:03:45.882 net/nfp: not in enabled drivers build config 00:03:45.882 net/ngbe: not in enabled drivers build config 00:03:45.882 net/null: not in enabled drivers build config 
00:03:45.882 net/octeontx: not in enabled drivers build config 00:03:45.882 net/octeon_ep: not in enabled drivers build config 00:03:45.882 net/pcap: not in enabled drivers build config 00:03:45.882 net/pfe: not in enabled drivers build config 00:03:45.882 net/qede: not in enabled drivers build config 00:03:45.882 net/ring: not in enabled drivers build config 00:03:45.882 net/sfc: not in enabled drivers build config 00:03:45.882 net/softnic: not in enabled drivers build config 00:03:45.882 net/tap: not in enabled drivers build config 00:03:45.882 net/thunderx: not in enabled drivers build config 00:03:45.882 net/txgbe: not in enabled drivers build config 00:03:45.882 net/vdev_netvsc: not in enabled drivers build config 00:03:45.882 net/vhost: not in enabled drivers build config 00:03:45.882 net/virtio: not in enabled drivers build config 00:03:45.882 net/vmxnet3: not in enabled drivers build config 00:03:45.882 raw/*: missing internal dependency, "rawdev" 00:03:45.882 crypto/armv8: not in enabled drivers build config 00:03:45.882 crypto/bcmfs: not in enabled drivers build config 00:03:45.882 crypto/caam_jr: not in enabled drivers build config 00:03:45.882 crypto/ccp: not in enabled drivers build config 00:03:45.882 crypto/cnxk: not in enabled drivers build config 00:03:45.882 crypto/dpaa_sec: not in enabled drivers build config 00:03:45.882 crypto/dpaa2_sec: not in enabled drivers build config 00:03:45.882 crypto/ipsec_mb: not in enabled drivers build config 00:03:45.882 crypto/mlx5: not in enabled drivers build config 00:03:45.882 crypto/mvsam: not in enabled drivers build config 00:03:45.882 crypto/nitrox: not in enabled drivers build config 00:03:45.882 crypto/null: not in enabled drivers build config 00:03:45.882 crypto/octeontx: not in enabled drivers build config 00:03:45.882 crypto/openssl: not in enabled drivers build config 00:03:45.882 crypto/scheduler: not in enabled drivers build config 00:03:45.882 crypto/uadk: not in enabled drivers build config 00:03:45.882 crypto/virtio: not in enabled drivers build config 00:03:45.882 compress/isal: not in enabled drivers build config 00:03:45.882 compress/mlx5: not in enabled drivers build config 00:03:45.882 compress/nitrox: not in enabled drivers build config 00:03:45.882 compress/octeontx: not in enabled drivers build config 00:03:45.882 compress/zlib: not in enabled drivers build config 00:03:45.882 regex/*: missing internal dependency, "regexdev" 00:03:45.882 ml/*: missing internal dependency, "mldev" 00:03:45.882 vdpa/ifc: not in enabled drivers build config 00:03:45.882 vdpa/mlx5: not in enabled drivers build config 00:03:45.882 vdpa/nfp: not in enabled drivers build config 00:03:45.882 vdpa/sfc: not in enabled drivers build config 00:03:45.882 event/*: missing internal dependency, "eventdev" 00:03:45.882 baseband/*: missing internal dependency, "bbdev" 00:03:45.882 gpu/*: missing internal dependency, "gpudev" 00:03:45.882 00:03:45.882 00:03:45.882 Build targets in project: 85 00:03:45.882 00:03:45.882 DPDK 24.03.0 00:03:45.882 00:03:45.882 User defined options 00:03:45.882 buildtype : debug 00:03:45.882 default_library : shared 00:03:45.882 libdir : lib 00:03:45.882 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:45.882 b_sanitize : address 00:03:45.882 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:45.882 c_link_args : 00:03:45.882 cpu_instruction_set: native 00:03:45.882 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:45.882 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:45.882 enable_docs : false 00:03:45.882 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:03:45.882 enable_kmods : false 00:03:45.882 max_lcores : 128 00:03:45.882 tests : false 00:03:45.882 00:03:45.882 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:45.882 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:03:45.882 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:45.882 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:45.882 [3/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:45.882 [4/268] Linking static target lib/librte_kvargs.a 00:03:45.882 [5/268] Linking static target lib/librte_log.a 00:03:45.882 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:46.141 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:46.141 [8/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:46.410 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:46.410 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:46.410 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:46.410 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:46.410 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:46.410 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:46.410 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:46.410 [16/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:46.410 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:46.698 [18/268] Linking static target lib/librte_telemetry.a 00:03:46.956 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:46.956 [20/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:46.956 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:46.956 [22/268] Linking target lib/librte_log.so.24.1 00:03:47.215 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:47.215 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:47.215 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:47.215 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:47.215 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:47.215 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:47.215 [29/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:47.215 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:47.215 [31/268] 
Linking target lib/librte_kvargs.so.24.1 00:03:47.215 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:47.474 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:47.732 [34/268] Linking target lib/librte_telemetry.so.24.1 00:03:47.732 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:47.732 [36/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:47.732 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:47.732 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:47.732 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:47.991 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:47.991 [41/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:47.991 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:47.991 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:47.991 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:47.991 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:48.249 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:48.249 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:48.249 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:48.249 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:48.507 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:48.507 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:48.507 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:48.507 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:48.783 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:48.783 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:48.783 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:48.783 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:48.783 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:49.041 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:49.041 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:49.041 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:49.041 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:49.041 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:49.299 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:49.300 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:49.300 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:49.300 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:49.559 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:49.559 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:49.559 [70/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:49.818 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:49.818 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:49.818 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:49.818 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:49.818 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:49.818 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:49.818 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:50.077 [78/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:50.077 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:50.077 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:50.334 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:50.335 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:50.335 [83/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:50.335 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:50.335 [85/268] Linking static target lib/librte_ring.a 00:03:50.335 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:50.592 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:50.851 [88/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:50.851 [89/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:50.851 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:50.851 [91/268] Linking static target lib/librte_eal.a 00:03:50.851 [92/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:50.851 [93/268] Linking static target lib/librte_rcu.a 00:03:50.851 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:50.851 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:50.851 [96/268] Linking static target lib/librte_mempool.a 00:03:51.118 [97/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:51.118 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:51.118 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:51.377 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:51.377 [101/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:51.377 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:51.377 [103/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:51.640 [104/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:51.640 [105/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:51.640 [106/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:51.640 [107/268] Linking static target lib/librte_meter.a 00:03:51.640 [108/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:51.640 [109/268] Linking static target lib/librte_mbuf.a 00:03:51.640 [110/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:51.898 [111/268] Linking static target lib/librte_net.a 00:03:51.898 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:51.898 [113/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:51.898 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:52.157 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:52.157 [116/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:52.416 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:52.416 [118/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:52.416 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:52.676 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:52.676 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:52.947 [122/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:52.947 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:53.204 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:53.204 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:53.204 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:53.204 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:53.204 [128/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:53.204 [129/268] Linking static target lib/librte_pci.a 00:03:53.460 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:53.460 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:53.460 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:53.460 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:53.460 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:53.718 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:53.718 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:53.718 [137/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:53.718 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:53.718 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:53.718 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:53.718 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:53.718 [142/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:53.976 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:53.976 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:53.976 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:54.236 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:54.236 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:54.495 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:54.495 [149/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:54.495 [150/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:54.495 [151/268] Linking static target lib/librte_cmdline.a 00:03:54.495 [152/268] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:54.495 [153/268] Linking static target lib/librte_timer.a 00:03:54.495 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:54.754 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:54.755 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:55.013 [157/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:55.013 [158/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:55.013 [159/268] Linking static target lib/librte_ethdev.a 00:03:55.013 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:55.013 [161/268] Linking static target lib/librte_compressdev.a 00:03:55.013 [162/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:55.309 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:55.309 [164/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:55.309 [165/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:55.309 [166/268] Linking static target lib/librte_hash.a 00:03:55.309 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:55.309 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:55.309 [169/268] Linking static target lib/librte_dmadev.a 00:03:55.568 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:55.568 [171/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:55.568 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:55.827 [173/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:56.085 [174/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:56.085 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:56.085 [176/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:56.343 [177/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:56.343 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:56.343 [179/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:56.343 [180/268] Linking static target lib/librte_cryptodev.a 00:03:56.343 [181/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:56.343 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:56.601 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:56.601 [184/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:56.858 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:56.859 [186/268] Linking static target lib/librte_power.a 00:03:56.859 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:56.859 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:56.859 [189/268] Linking static target lib/librte_reorder.a 00:03:57.117 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:57.117 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:57.117 [192/268] Compiling C object 
lib/librte_security.a.p/security_rte_security.c.o 00:03:57.117 [193/268] Linking static target lib/librte_security.a 00:03:57.687 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:57.687 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:57.946 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:58.376 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:58.376 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:58.376 [199/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:58.376 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:58.637 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:58.637 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:58.637 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:58.637 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:58.637 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:58.896 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:59.155 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:59.155 [208/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:59.155 [209/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:59.155 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:59.155 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:59.413 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:59.413 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:59.413 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:59.413 [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:59.413 [216/268] Linking static target drivers/librte_bus_vdev.a 00:03:59.413 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:59.413 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:59.413 [219/268] Linking static target drivers/librte_bus_pci.a 00:03:59.413 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:59.413 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:59.673 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:59.673 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:59.673 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:59.673 [225/268] Linking static target drivers/librte_mempool_ring.a 00:03:59.673 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:00.241 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:00.809 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:04:05.062 [229/268] Generating lib/eal.sym_chk 
with a custom command (wrapped by meson to capture output) 00:04:05.062 [230/268] Linking target lib/librte_eal.so.24.1 00:04:05.062 [231/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:05.062 [232/268] Linking static target lib/librte_vhost.a 00:04:05.062 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:04:05.062 [234/268] Linking target lib/librte_ring.so.24.1 00:04:05.062 [235/268] Linking target lib/librte_meter.so.24.1 00:04:05.062 [236/268] Linking target lib/librte_timer.so.24.1 00:04:05.062 [237/268] Linking target lib/librte_pci.so.24.1 00:04:05.062 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:04:05.062 [239/268] Linking target lib/librte_dmadev.so.24.1 00:04:05.062 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:04:05.062 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:04:05.062 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:04:05.062 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:04:05.062 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:04:05.062 [245/268] Linking target lib/librte_mempool.so.24.1 00:04:05.062 [246/268] Linking target lib/librte_rcu.so.24.1 00:04:05.062 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:04:05.062 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:05.062 [249/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:04:05.062 [250/268] Linking target lib/librte_mbuf.so.24.1 00:04:05.062 [251/268] Linking target drivers/librte_mempool_ring.so.24.1 00:04:05.321 [252/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:05.321 [253/268] Linking target lib/librte_reorder.so.24.1 00:04:05.321 [254/268] Linking target lib/librte_compressdev.so.24.1 00:04:05.321 [255/268] Linking target lib/librte_net.so.24.1 00:04:05.321 [256/268] Linking target lib/librte_cryptodev.so.24.1 00:04:05.321 [257/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:05.580 [258/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:05.580 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:05.580 [260/268] Linking target lib/librte_security.so.24.1 00:04:05.580 [261/268] Linking target lib/librte_hash.so.24.1 00:04:05.580 [262/268] Linking target lib/librte_cmdline.so.24.1 00:04:05.580 [263/268] Linking target lib/librte_ethdev.so.24.1 00:04:05.840 [264/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:05.840 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:05.840 [266/268] Linking target lib/librte_power.so.24.1 00:04:06.784 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:07.043 [268/268] Linking target lib/librte_vhost.so.24.1 00:04:07.043 INFO: autodetecting backend as ninja 00:04:07.043 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:04:28.974 CC lib/ut/ut.o 00:04:28.974 CC lib/ut_mock/mock.o 00:04:28.974 CC lib/log/log.o 00:04:28.974 CC lib/log/log_flags.o 00:04:28.974 CC lib/log/log_deprecated.o 00:04:28.974 LIB libspdk_ut.a 
00:04:28.974 LIB libspdk_ut_mock.a 00:04:28.974 LIB libspdk_log.a 00:04:28.974 SO libspdk_ut.so.2.0 00:04:28.974 SO libspdk_ut_mock.so.6.0 00:04:28.974 SO libspdk_log.so.7.1 00:04:28.974 SYMLINK libspdk_ut.so 00:04:28.974 SYMLINK libspdk_ut_mock.so 00:04:28.974 SYMLINK libspdk_log.so 00:04:28.974 CXX lib/trace_parser/trace.o 00:04:28.974 CC lib/dma/dma.o 00:04:28.974 CC lib/util/base64.o 00:04:28.974 CC lib/util/bit_array.o 00:04:28.974 CC lib/ioat/ioat.o 00:04:28.974 CC lib/util/crc32c.o 00:04:28.974 CC lib/util/cpuset.o 00:04:28.974 CC lib/util/crc32.o 00:04:28.974 CC lib/util/crc16.o 00:04:28.974 CC lib/vfio_user/host/vfio_user_pci.o 00:04:28.974 CC lib/util/crc32_ieee.o 00:04:28.974 CC lib/vfio_user/host/vfio_user.o 00:04:28.974 CC lib/util/crc64.o 00:04:28.974 CC lib/util/dif.o 00:04:28.974 LIB libspdk_dma.a 00:04:28.974 CC lib/util/fd.o 00:04:28.974 SO libspdk_dma.so.5.0 00:04:28.974 CC lib/util/fd_group.o 00:04:28.974 CC lib/util/file.o 00:04:28.974 CC lib/util/hexlify.o 00:04:28.974 SYMLINK libspdk_dma.so 00:04:28.974 CC lib/util/iov.o 00:04:28.974 LIB libspdk_ioat.a 00:04:28.974 SO libspdk_ioat.so.7.0 00:04:28.974 CC lib/util/math.o 00:04:28.974 CC lib/util/net.o 00:04:28.974 LIB libspdk_vfio_user.a 00:04:28.974 SYMLINK libspdk_ioat.so 00:04:28.974 CC lib/util/pipe.o 00:04:28.974 SO libspdk_vfio_user.so.5.0 00:04:28.974 CC lib/util/strerror_tls.o 00:04:28.974 CC lib/util/string.o 00:04:28.974 SYMLINK libspdk_vfio_user.so 00:04:28.974 CC lib/util/uuid.o 00:04:28.974 CC lib/util/xor.o 00:04:28.974 CC lib/util/zipf.o 00:04:28.974 CC lib/util/md5.o 00:04:28.974 LIB libspdk_util.a 00:04:29.234 SO libspdk_util.so.10.1 00:04:29.234 LIB libspdk_trace_parser.a 00:04:29.234 SO libspdk_trace_parser.so.6.0 00:04:29.493 SYMLINK libspdk_util.so 00:04:29.493 SYMLINK libspdk_trace_parser.so 00:04:29.752 CC lib/conf/conf.o 00:04:29.752 CC lib/vmd/led.o 00:04:29.752 CC lib/vmd/vmd.o 00:04:29.752 CC lib/env_dpdk/env.o 00:04:29.752 CC lib/env_dpdk/pci.o 00:04:29.752 CC lib/env_dpdk/memory.o 00:04:29.752 CC lib/env_dpdk/init.o 00:04:29.752 CC lib/idxd/idxd.o 00:04:29.752 CC lib/json/json_parse.o 00:04:29.752 CC lib/rdma_utils/rdma_utils.o 00:04:29.752 CC lib/env_dpdk/threads.o 00:04:30.010 LIB libspdk_conf.a 00:04:30.010 CC lib/json/json_util.o 00:04:30.010 SO libspdk_conf.so.6.0 00:04:30.010 CC lib/env_dpdk/pci_ioat.o 00:04:30.010 LIB libspdk_rdma_utils.a 00:04:30.010 SYMLINK libspdk_conf.so 00:04:30.010 CC lib/env_dpdk/pci_virtio.o 00:04:30.010 SO libspdk_rdma_utils.so.1.0 00:04:30.010 CC lib/env_dpdk/pci_vmd.o 00:04:30.010 CC lib/env_dpdk/pci_idxd.o 00:04:30.010 SYMLINK libspdk_rdma_utils.so 00:04:30.010 CC lib/env_dpdk/pci_event.o 00:04:30.268 CC lib/env_dpdk/sigbus_handler.o 00:04:30.268 CC lib/env_dpdk/pci_dpdk.o 00:04:30.268 CC lib/json/json_write.o 00:04:30.268 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:30.268 CC lib/idxd/idxd_user.o 00:04:30.268 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:30.268 CC lib/idxd/idxd_kernel.o 00:04:30.525 LIB libspdk_vmd.a 00:04:30.525 SO libspdk_vmd.so.6.0 00:04:30.525 LIB libspdk_idxd.a 00:04:30.525 SYMLINK libspdk_vmd.so 00:04:30.525 LIB libspdk_json.a 00:04:30.525 CC lib/rdma_provider/common.o 00:04:30.525 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:30.525 SO libspdk_idxd.so.12.1 00:04:30.783 SO libspdk_json.so.6.0 00:04:30.783 SYMLINK libspdk_idxd.so 00:04:30.783 SYMLINK libspdk_json.so 00:04:30.783 LIB libspdk_rdma_provider.a 00:04:31.041 SO libspdk_rdma_provider.so.7.0 00:04:31.041 SYMLINK libspdk_rdma_provider.so 00:04:31.041 CC lib/jsonrpc/jsonrpc_server.o 
00:04:31.041 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:31.041 CC lib/jsonrpc/jsonrpc_client.o 00:04:31.041 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:31.299 LIB libspdk_env_dpdk.a 00:04:31.557 LIB libspdk_jsonrpc.a 00:04:31.557 SO libspdk_jsonrpc.so.6.0 00:04:31.557 SO libspdk_env_dpdk.so.15.1 00:04:31.557 SYMLINK libspdk_jsonrpc.so 00:04:31.816 SYMLINK libspdk_env_dpdk.so 00:04:32.076 CC lib/rpc/rpc.o 00:04:32.336 LIB libspdk_rpc.a 00:04:32.336 SO libspdk_rpc.so.6.0 00:04:32.595 SYMLINK libspdk_rpc.so 00:04:32.854 CC lib/trace/trace.o 00:04:32.854 CC lib/trace/trace_flags.o 00:04:32.854 CC lib/trace/trace_rpc.o 00:04:32.854 CC lib/notify/notify.o 00:04:32.854 CC lib/notify/notify_rpc.o 00:04:32.854 CC lib/keyring/keyring.o 00:04:32.854 CC lib/keyring/keyring_rpc.o 00:04:33.111 LIB libspdk_notify.a 00:04:33.111 SO libspdk_notify.so.6.0 00:04:33.111 LIB libspdk_trace.a 00:04:33.111 LIB libspdk_keyring.a 00:04:33.111 SYMLINK libspdk_notify.so 00:04:33.370 SO libspdk_keyring.so.2.0 00:04:33.370 SO libspdk_trace.so.11.0 00:04:33.370 SYMLINK libspdk_keyring.so 00:04:33.370 SYMLINK libspdk_trace.so 00:04:33.629 CC lib/sock/sock_rpc.o 00:04:33.629 CC lib/sock/sock.o 00:04:33.629 CC lib/thread/thread.o 00:04:33.888 CC lib/thread/iobuf.o 00:04:34.147 LIB libspdk_sock.a 00:04:34.406 SO libspdk_sock.so.10.0 00:04:34.406 SYMLINK libspdk_sock.so 00:04:34.973 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:34.973 CC lib/nvme/nvme_ctrlr.o 00:04:34.973 CC lib/nvme/nvme_fabric.o 00:04:34.973 CC lib/nvme/nvme_ns_cmd.o 00:04:34.973 CC lib/nvme/nvme_ns.o 00:04:34.973 CC lib/nvme/nvme.o 00:04:34.973 CC lib/nvme/nvme_pcie_common.o 00:04:34.973 CC lib/nvme/nvme_pcie.o 00:04:34.973 CC lib/nvme/nvme_qpair.o 00:04:35.540 CC lib/nvme/nvme_quirks.o 00:04:35.540 LIB libspdk_thread.a 00:04:35.540 CC lib/nvme/nvme_transport.o 00:04:35.540 SO libspdk_thread.so.11.0 00:04:35.540 CC lib/nvme/nvme_discovery.o 00:04:35.540 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:35.540 SYMLINK libspdk_thread.so 00:04:35.540 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:35.799 CC lib/nvme/nvme_tcp.o 00:04:35.799 CC lib/nvme/nvme_opal.o 00:04:35.799 CC lib/nvme/nvme_io_msg.o 00:04:36.057 CC lib/nvme/nvme_poll_group.o 00:04:36.058 CC lib/accel/accel.o 00:04:36.058 CC lib/accel/accel_rpc.o 00:04:36.058 CC lib/accel/accel_sw.o 00:04:36.315 CC lib/nvme/nvme_zns.o 00:04:36.315 CC lib/nvme/nvme_stubs.o 00:04:36.315 CC lib/nvme/nvme_auth.o 00:04:36.573 CC lib/nvme/nvme_cuse.o 00:04:36.573 CC lib/blob/blobstore.o 00:04:36.858 CC lib/init/json_config.o 00:04:36.858 CC lib/init/subsystem.o 00:04:37.167 CC lib/init/subsystem_rpc.o 00:04:37.167 CC lib/init/rpc.o 00:04:37.167 CC lib/virtio/virtio.o 00:04:37.167 CC lib/virtio/virtio_vhost_user.o 00:04:37.448 CC lib/virtio/virtio_vfio_user.o 00:04:37.448 LIB libspdk_init.a 00:04:37.448 SO libspdk_init.so.6.0 00:04:37.706 SYMLINK libspdk_init.so 00:04:37.706 CC lib/nvme/nvme_vfio_user.o 00:04:37.706 CC lib/virtio/virtio_pci.o 00:04:37.706 CC lib/nvme/nvme_rdma.o 00:04:37.706 CC lib/blob/request.o 00:04:37.962 CC lib/blob/zeroes.o 00:04:37.962 LIB libspdk_accel.a 00:04:37.962 CC lib/blob/blob_bs_dev.o 00:04:37.962 LIB libspdk_virtio.a 00:04:37.962 SO libspdk_accel.so.16.0 00:04:37.962 SO libspdk_virtio.so.7.0 00:04:38.220 SYMLINK libspdk_virtio.so 00:04:38.220 SYMLINK libspdk_accel.so 00:04:38.477 CC lib/vfu_tgt/tgt_endpoint.o 00:04:38.477 CC lib/vfu_tgt/tgt_rpc.o 00:04:38.477 CC lib/bdev/bdev.o 00:04:38.477 CC lib/bdev/bdev_zone.o 00:04:38.477 CC lib/bdev/bdev_rpc.o 00:04:38.477 CC lib/event/app.o 00:04:38.477 CC 
lib/fsdev/fsdev.o 00:04:38.477 CC lib/fsdev/fsdev_io.o 00:04:38.735 CC lib/bdev/part.o 00:04:38.735 CC lib/bdev/scsi_nvme.o 00:04:39.001 CC lib/event/reactor.o 00:04:39.001 LIB libspdk_vfu_tgt.a 00:04:39.001 SO libspdk_vfu_tgt.so.3.0 00:04:39.001 CC lib/event/log_rpc.o 00:04:39.001 CC lib/fsdev/fsdev_rpc.o 00:04:39.001 SYMLINK libspdk_vfu_tgt.so 00:04:39.001 CC lib/event/app_rpc.o 00:04:39.263 CC lib/event/scheduler_static.o 00:04:39.520 LIB libspdk_fsdev.a 00:04:39.520 LIB libspdk_event.a 00:04:39.520 SO libspdk_fsdev.so.2.0 00:04:39.520 SO libspdk_event.so.14.0 00:04:39.520 SYMLINK libspdk_fsdev.so 00:04:39.778 SYMLINK libspdk_event.so 00:04:39.778 LIB libspdk_nvme.a 00:04:40.036 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:40.037 SO libspdk_nvme.so.15.0 00:04:40.296 SYMLINK libspdk_nvme.so 00:04:40.555 LIB libspdk_fuse_dispatcher.a 00:04:40.813 SO libspdk_fuse_dispatcher.so.1.0 00:04:40.813 SYMLINK libspdk_fuse_dispatcher.so 00:04:41.380 LIB libspdk_blob.a 00:04:41.380 SO libspdk_blob.so.11.0 00:04:41.637 SYMLINK libspdk_blob.so 00:04:41.903 LIB libspdk_bdev.a 00:04:41.903 SO libspdk_bdev.so.17.0 00:04:41.903 CC lib/blobfs/tree.o 00:04:41.903 CC lib/blobfs/blobfs.o 00:04:41.903 CC lib/lvol/lvol.o 00:04:41.903 SYMLINK libspdk_bdev.so 00:04:42.162 CC lib/scsi/dev.o 00:04:42.162 CC lib/ublk/ublk.o 00:04:42.162 CC lib/ftl/ftl_init.o 00:04:42.162 CC lib/ublk/ublk_rpc.o 00:04:42.162 CC lib/ftl/ftl_core.o 00:04:42.162 CC lib/scsi/lun.o 00:04:42.162 CC lib/nbd/nbd.o 00:04:42.162 CC lib/nvmf/ctrlr.o 00:04:42.423 CC lib/nbd/nbd_rpc.o 00:04:42.423 CC lib/ftl/ftl_layout.o 00:04:42.423 CC lib/ftl/ftl_debug.o 00:04:42.681 CC lib/scsi/port.o 00:04:42.681 CC lib/scsi/scsi.o 00:04:42.681 CC lib/scsi/scsi_bdev.o 00:04:42.681 LIB libspdk_nbd.a 00:04:42.681 CC lib/nvmf/ctrlr_discovery.o 00:04:42.681 CC lib/nvmf/ctrlr_bdev.o 00:04:42.681 SO libspdk_nbd.so.7.0 00:04:42.681 CC lib/ftl/ftl_io.o 00:04:42.681 SYMLINK libspdk_nbd.so 00:04:42.681 CC lib/ftl/ftl_l2p.o 00:04:42.681 CC lib/ftl/ftl_sb.o 00:04:42.940 LIB libspdk_blobfs.a 00:04:42.940 SO libspdk_blobfs.so.10.0 00:04:42.940 LIB libspdk_ublk.a 00:04:42.940 SO libspdk_ublk.so.3.0 00:04:42.940 SYMLINK libspdk_blobfs.so 00:04:42.940 CC lib/ftl/ftl_l2p_flat.o 00:04:42.940 CC lib/ftl/ftl_nv_cache.o 00:04:42.940 CC lib/nvmf/subsystem.o 00:04:42.940 LIB libspdk_lvol.a 00:04:42.940 SYMLINK libspdk_ublk.so 00:04:42.940 CC lib/scsi/scsi_pr.o 00:04:42.940 CC lib/ftl/ftl_band.o 00:04:43.198 SO libspdk_lvol.so.10.0 00:04:43.198 SYMLINK libspdk_lvol.so 00:04:43.198 CC lib/nvmf/nvmf.o 00:04:43.198 CC lib/ftl/ftl_band_ops.o 00:04:43.198 CC lib/ftl/ftl_writer.o 00:04:43.198 CC lib/ftl/ftl_rq.o 00:04:43.456 CC lib/scsi/scsi_rpc.o 00:04:43.456 CC lib/scsi/task.o 00:04:43.456 CC lib/ftl/ftl_reloc.o 00:04:43.456 CC lib/nvmf/nvmf_rpc.o 00:04:43.456 CC lib/nvmf/transport.o 00:04:43.456 CC lib/ftl/ftl_l2p_cache.o 00:04:43.456 CC lib/ftl/ftl_p2l.o 00:04:43.714 LIB libspdk_scsi.a 00:04:43.714 SO libspdk_scsi.so.9.0 00:04:43.714 SYMLINK libspdk_scsi.so 00:04:43.714 CC lib/ftl/ftl_p2l_log.o 00:04:43.973 CC lib/nvmf/tcp.o 00:04:43.973 CC lib/ftl/mngt/ftl_mngt.o 00:04:44.229 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:44.229 CC lib/nvmf/stubs.o 00:04:44.229 CC lib/vhost/vhost.o 00:04:44.229 CC lib/iscsi/conn.o 00:04:44.486 CC lib/iscsi/init_grp.o 00:04:44.486 CC lib/iscsi/iscsi.o 00:04:44.486 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:44.486 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:44.486 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:44.486 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:44.486 CC 
lib/iscsi/param.o 00:04:44.743 CC lib/iscsi/portal_grp.o 00:04:44.743 CC lib/iscsi/tgt_node.o 00:04:44.743 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:45.004 CC lib/iscsi/iscsi_subsystem.o 00:04:45.004 CC lib/iscsi/iscsi_rpc.o 00:04:45.004 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:45.004 CC lib/vhost/vhost_rpc.o 00:04:45.004 CC lib/iscsi/task.o 00:04:45.004 CC lib/vhost/vhost_scsi.o 00:04:45.004 CC lib/vhost/vhost_blk.o 00:04:45.271 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:45.271 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:45.271 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:45.529 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:45.529 CC lib/nvmf/mdns_server.o 00:04:45.529 CC lib/nvmf/vfio_user.o 00:04:45.529 CC lib/nvmf/rdma.o 00:04:45.529 CC lib/nvmf/auth.o 00:04:45.787 CC lib/vhost/rte_vhost_user.o 00:04:45.787 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:46.045 CC lib/ftl/utils/ftl_conf.o 00:04:46.045 CC lib/ftl/utils/ftl_md.o 00:04:46.045 CC lib/ftl/utils/ftl_mempool.o 00:04:46.045 CC lib/ftl/utils/ftl_bitmap.o 00:04:46.045 CC lib/ftl/utils/ftl_property.o 00:04:46.045 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:46.045 LIB libspdk_iscsi.a 00:04:46.303 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:46.303 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:46.303 SO libspdk_iscsi.so.8.0 00:04:46.303 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:46.303 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:46.560 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:46.560 SYMLINK libspdk_iscsi.so 00:04:46.560 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:46.560 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:46.560 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:46.560 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:46.560 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:46.560 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:46.560 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:46.818 CC lib/ftl/base/ftl_base_dev.o 00:04:46.818 CC lib/ftl/base/ftl_base_bdev.o 00:04:46.818 CC lib/ftl/ftl_trace.o 00:04:46.818 LIB libspdk_vhost.a 00:04:46.818 SO libspdk_vhost.so.8.0 00:04:46.818 SYMLINK libspdk_vhost.so 00:04:47.076 LIB libspdk_ftl.a 00:04:47.333 SO libspdk_ftl.so.9.0 00:04:47.591 SYMLINK libspdk_ftl.so 00:04:48.200 LIB libspdk_nvmf.a 00:04:48.458 SO libspdk_nvmf.so.20.0 00:04:48.716 SYMLINK libspdk_nvmf.so 00:04:48.974 CC module/env_dpdk/env_dpdk_rpc.o 00:04:48.974 CC module/vfu_device/vfu_virtio.o 00:04:49.231 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:49.231 CC module/fsdev/aio/fsdev_aio.o 00:04:49.231 CC module/sock/posix/posix.o 00:04:49.231 CC module/accel/ioat/accel_ioat.o 00:04:49.231 CC module/blob/bdev/blob_bdev.o 00:04:49.231 CC module/accel/error/accel_error.o 00:04:49.231 CC module/keyring/file/keyring.o 00:04:49.231 CC module/sock/uring/uring.o 00:04:49.231 LIB libspdk_env_dpdk_rpc.a 00:04:49.231 SO libspdk_env_dpdk_rpc.so.6.0 00:04:49.231 SYMLINK libspdk_env_dpdk_rpc.so 00:04:49.231 CC module/accel/error/accel_error_rpc.o 00:04:49.231 CC module/keyring/file/keyring_rpc.o 00:04:49.231 CC module/accel/ioat/accel_ioat_rpc.o 00:04:49.488 LIB libspdk_scheduler_dynamic.a 00:04:49.488 SO libspdk_scheduler_dynamic.so.4.0 00:04:49.488 LIB libspdk_accel_error.a 00:04:49.488 SYMLINK libspdk_scheduler_dynamic.so 00:04:49.488 CC module/vfu_device/vfu_virtio_blk.o 00:04:49.488 SO libspdk_accel_error.so.2.0 00:04:49.488 LIB libspdk_blob_bdev.a 00:04:49.488 LIB libspdk_keyring_file.a 00:04:49.488 LIB libspdk_accel_ioat.a 00:04:49.488 SO libspdk_blob_bdev.so.11.0 00:04:49.488 SO libspdk_keyring_file.so.2.0 00:04:49.488 SO libspdk_accel_ioat.so.6.0 00:04:49.488 CC 
module/scheduler/dpdk_governor/dpdk_governor.o 00:04:49.488 SYMLINK libspdk_accel_error.so 00:04:49.488 CC module/vfu_device/vfu_virtio_scsi.o 00:04:49.745 SYMLINK libspdk_accel_ioat.so 00:04:49.745 SYMLINK libspdk_blob_bdev.so 00:04:49.745 SYMLINK libspdk_keyring_file.so 00:04:49.745 LIB libspdk_scheduler_dpdk_governor.a 00:04:49.745 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:49.745 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:49.745 CC module/accel/iaa/accel_iaa.o 00:04:49.745 CC module/accel/dsa/accel_dsa.o 00:04:49.745 CC module/keyring/linux/keyring.o 00:04:49.745 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:49.745 CC module/keyring/linux/keyring_rpc.o 00:04:50.004 CC module/accel/dsa/accel_dsa_rpc.o 00:04:50.004 CC module/vfu_device/vfu_virtio_rpc.o 00:04:50.004 CC module/fsdev/aio/linux_aio_mgr.o 00:04:50.004 LIB libspdk_keyring_linux.a 00:04:50.004 CC module/accel/iaa/accel_iaa_rpc.o 00:04:50.004 CC module/scheduler/gscheduler/gscheduler.o 00:04:50.004 LIB libspdk_sock_uring.a 00:04:50.004 SO libspdk_keyring_linux.so.1.0 00:04:50.004 LIB libspdk_sock_posix.a 00:04:50.004 SO libspdk_sock_uring.so.5.0 00:04:50.004 SO libspdk_sock_posix.so.6.0 00:04:50.261 CC module/vfu_device/vfu_virtio_fs.o 00:04:50.261 LIB libspdk_accel_dsa.a 00:04:50.261 SYMLINK libspdk_keyring_linux.so 00:04:50.261 SYMLINK libspdk_sock_uring.so 00:04:50.261 SO libspdk_accel_dsa.so.5.0 00:04:50.261 SYMLINK libspdk_sock_posix.so 00:04:50.261 LIB libspdk_scheduler_gscheduler.a 00:04:50.261 LIB libspdk_accel_iaa.a 00:04:50.261 LIB libspdk_fsdev_aio.a 00:04:50.261 SO libspdk_scheduler_gscheduler.so.4.0 00:04:50.261 SO libspdk_accel_iaa.so.3.0 00:04:50.261 CC module/bdev/delay/vbdev_delay.o 00:04:50.262 SO libspdk_fsdev_aio.so.1.0 00:04:50.262 SYMLINK libspdk_accel_dsa.so 00:04:50.262 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:50.262 SYMLINK libspdk_scheduler_gscheduler.so 00:04:50.262 SYMLINK libspdk_accel_iaa.so 00:04:50.519 SYMLINK libspdk_fsdev_aio.so 00:04:50.519 LIB libspdk_vfu_device.a 00:04:50.519 CC module/bdev/gpt/gpt.o 00:04:50.519 CC module/bdev/error/vbdev_error.o 00:04:50.519 CC module/bdev/lvol/vbdev_lvol.o 00:04:50.519 CC module/blobfs/bdev/blobfs_bdev.o 00:04:50.519 SO libspdk_vfu_device.so.3.0 00:04:50.519 CC module/bdev/malloc/bdev_malloc.o 00:04:50.519 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:50.519 CC module/bdev/null/bdev_null.o 00:04:50.519 CC module/bdev/nvme/bdev_nvme.o 00:04:50.519 SYMLINK libspdk_vfu_device.so 00:04:50.519 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:50.776 CC module/bdev/gpt/vbdev_gpt.o 00:04:50.776 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:50.776 LIB libspdk_bdev_delay.a 00:04:50.776 CC module/bdev/error/vbdev_error_rpc.o 00:04:50.776 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:50.776 SO libspdk_bdev_delay.so.6.0 00:04:50.776 CC module/bdev/null/bdev_null_rpc.o 00:04:51.034 SYMLINK libspdk_bdev_delay.so 00:04:51.034 LIB libspdk_blobfs_bdev.a 00:04:51.034 LIB libspdk_bdev_error.a 00:04:51.034 SO libspdk_blobfs_bdev.so.6.0 00:04:51.034 SO libspdk_bdev_error.so.6.0 00:04:51.034 LIB libspdk_bdev_malloc.a 00:04:51.034 LIB libspdk_bdev_gpt.a 00:04:51.034 SYMLINK libspdk_blobfs_bdev.so 00:04:51.034 SO libspdk_bdev_malloc.so.6.0 00:04:51.034 CC module/bdev/nvme/nvme_rpc.o 00:04:51.034 LIB libspdk_bdev_null.a 00:04:51.034 SYMLINK libspdk_bdev_error.so 00:04:51.034 CC module/bdev/nvme/bdev_mdns_client.o 00:04:51.034 CC module/bdev/nvme/vbdev_opal.o 00:04:51.034 SO libspdk_bdev_gpt.so.6.0 00:04:51.034 SO libspdk_bdev_null.so.6.0 00:04:51.034 LIB libspdk_bdev_lvol.a 
00:04:51.034 CC module/bdev/passthru/vbdev_passthru.o 00:04:51.034 SYMLINK libspdk_bdev_malloc.so 00:04:51.292 SO libspdk_bdev_lvol.so.6.0 00:04:51.292 SYMLINK libspdk_bdev_gpt.so 00:04:51.292 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:51.292 SYMLINK libspdk_bdev_null.so 00:04:51.292 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:51.292 SYMLINK libspdk_bdev_lvol.so 00:04:51.292 CC module/bdev/raid/bdev_raid.o 00:04:51.550 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:51.550 CC module/bdev/raid/bdev_raid_rpc.o 00:04:51.550 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:51.550 CC module/bdev/split/vbdev_split.o 00:04:51.550 CC module/bdev/split/vbdev_split_rpc.o 00:04:51.550 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:51.550 CC module/bdev/uring/bdev_uring.o 00:04:51.550 LIB libspdk_bdev_passthru.a 00:04:51.550 CC module/bdev/aio/bdev_aio.o 00:04:51.550 SO libspdk_bdev_passthru.so.6.0 00:04:51.808 CC module/bdev/uring/bdev_uring_rpc.o 00:04:51.808 SYMLINK libspdk_bdev_passthru.so 00:04:51.808 CC module/bdev/raid/bdev_raid_sb.o 00:04:51.808 CC module/bdev/raid/raid0.o 00:04:51.808 LIB libspdk_bdev_split.a 00:04:51.808 SO libspdk_bdev_split.so.6.0 00:04:51.808 SYMLINK libspdk_bdev_split.so 00:04:51.808 CC module/bdev/raid/raid1.o 00:04:51.808 CC module/bdev/ftl/bdev_ftl.o 00:04:51.808 CC module/bdev/aio/bdev_aio_rpc.o 00:04:51.808 LIB libspdk_bdev_zone_block.a 00:04:52.067 SO libspdk_bdev_zone_block.so.6.0 00:04:52.067 LIB libspdk_bdev_uring.a 00:04:52.067 CC module/bdev/raid/concat.o 00:04:52.067 SYMLINK libspdk_bdev_zone_block.so 00:04:52.067 SO libspdk_bdev_uring.so.6.0 00:04:52.067 LIB libspdk_bdev_aio.a 00:04:52.067 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:52.067 SO libspdk_bdev_aio.so.6.0 00:04:52.067 SYMLINK libspdk_bdev_uring.so 00:04:52.326 SYMLINK libspdk_bdev_aio.so 00:04:52.326 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:52.326 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:52.326 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:52.326 CC module/bdev/iscsi/bdev_iscsi.o 00:04:52.326 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:52.326 LIB libspdk_bdev_ftl.a 00:04:52.583 SO libspdk_bdev_ftl.so.6.0 00:04:52.583 SYMLINK libspdk_bdev_ftl.so 00:04:52.842 LIB libspdk_bdev_raid.a 00:04:52.842 SO libspdk_bdev_raid.so.6.0 00:04:52.842 LIB libspdk_bdev_iscsi.a 00:04:52.842 SO libspdk_bdev_iscsi.so.6.0 00:04:53.100 SYMLINK libspdk_bdev_raid.so 00:04:53.100 LIB libspdk_bdev_virtio.a 00:04:53.100 SYMLINK libspdk_bdev_iscsi.so 00:04:53.100 SO libspdk_bdev_virtio.so.6.0 00:04:53.100 SYMLINK libspdk_bdev_virtio.so 00:04:54.032 LIB libspdk_bdev_nvme.a 00:04:54.032 SO libspdk_bdev_nvme.so.7.1 00:04:54.032 SYMLINK libspdk_bdev_nvme.so 00:04:54.967 CC module/event/subsystems/vmd/vmd.o 00:04:54.967 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:54.967 CC module/event/subsystems/fsdev/fsdev.o 00:04:54.967 CC module/event/subsystems/sock/sock.o 00:04:54.967 CC module/event/subsystems/scheduler/scheduler.o 00:04:54.967 CC module/event/subsystems/keyring/keyring.o 00:04:54.967 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:54.967 CC module/event/subsystems/iobuf/iobuf.o 00:04:54.967 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:54.967 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:54.967 LIB libspdk_event_keyring.a 00:04:54.967 LIB libspdk_event_sock.a 00:04:54.967 LIB libspdk_event_fsdev.a 00:04:54.967 LIB libspdk_event_vfu_tgt.a 00:04:54.967 LIB libspdk_event_vmd.a 00:04:54.967 LIB libspdk_event_scheduler.a 00:04:54.967 LIB libspdk_event_vhost_blk.a 00:04:54.967 SO 
libspdk_event_sock.so.5.0 00:04:54.967 SO libspdk_event_keyring.so.1.0 00:04:54.967 SO libspdk_event_fsdev.so.1.0 00:04:54.967 SO libspdk_event_vfu_tgt.so.3.0 00:04:54.967 SO libspdk_event_scheduler.so.4.0 00:04:54.967 SO libspdk_event_vmd.so.6.0 00:04:54.967 LIB libspdk_event_iobuf.a 00:04:54.967 SO libspdk_event_vhost_blk.so.3.0 00:04:54.967 SYMLINK libspdk_event_sock.so 00:04:54.967 SO libspdk_event_iobuf.so.3.0 00:04:54.967 SYMLINK libspdk_event_keyring.so 00:04:54.967 SYMLINK libspdk_event_fsdev.so 00:04:54.967 SYMLINK libspdk_event_scheduler.so 00:04:54.967 SYMLINK libspdk_event_vfu_tgt.so 00:04:54.967 SYMLINK libspdk_event_vhost_blk.so 00:04:54.967 SYMLINK libspdk_event_vmd.so 00:04:54.967 SYMLINK libspdk_event_iobuf.so 00:04:55.533 CC module/event/subsystems/accel/accel.o 00:04:55.791 LIB libspdk_event_accel.a 00:04:55.791 SO libspdk_event_accel.so.6.0 00:04:55.791 SYMLINK libspdk_event_accel.so 00:04:56.356 CC module/event/subsystems/bdev/bdev.o 00:04:56.356 LIB libspdk_event_bdev.a 00:04:56.612 SO libspdk_event_bdev.so.6.0 00:04:56.612 SYMLINK libspdk_event_bdev.so 00:04:56.869 CC module/event/subsystems/nbd/nbd.o 00:04:56.869 CC module/event/subsystems/ublk/ublk.o 00:04:56.869 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:56.869 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:56.869 CC module/event/subsystems/scsi/scsi.o 00:04:57.125 LIB libspdk_event_nbd.a 00:04:57.125 LIB libspdk_event_ublk.a 00:04:57.125 SO libspdk_event_nbd.so.6.0 00:04:57.125 SO libspdk_event_ublk.so.3.0 00:04:57.125 SYMLINK libspdk_event_nbd.so 00:04:57.125 LIB libspdk_event_scsi.a 00:04:57.125 SO libspdk_event_scsi.so.6.0 00:04:57.125 LIB libspdk_event_nvmf.a 00:04:57.125 SYMLINK libspdk_event_ublk.so 00:04:57.382 SYMLINK libspdk_event_scsi.so 00:04:57.382 SO libspdk_event_nvmf.so.6.0 00:04:57.382 SYMLINK libspdk_event_nvmf.so 00:04:57.639 CC module/event/subsystems/iscsi/iscsi.o 00:04:57.639 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:57.896 LIB libspdk_event_iscsi.a 00:04:57.896 LIB libspdk_event_vhost_scsi.a 00:04:57.896 SO libspdk_event_iscsi.so.6.0 00:04:57.896 SO libspdk_event_vhost_scsi.so.3.0 00:04:57.896 SYMLINK libspdk_event_iscsi.so 00:04:57.896 SYMLINK libspdk_event_vhost_scsi.so 00:04:58.153 SO libspdk.so.6.0 00:04:58.154 SYMLINK libspdk.so 00:04:58.720 CXX app/trace/trace.o 00:04:58.720 CC app/trace_record/trace_record.o 00:04:58.720 CC app/spdk_lspci/spdk_lspci.o 00:04:58.720 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:58.720 CC app/iscsi_tgt/iscsi_tgt.o 00:04:58.720 CC app/spdk_tgt/spdk_tgt.o 00:04:58.720 CC app/nvmf_tgt/nvmf_main.o 00:04:58.720 CC examples/ioat/perf/perf.o 00:04:58.720 CC test/thread/poller_perf/poller_perf.o 00:04:58.720 CC examples/util/zipf/zipf.o 00:04:58.720 LINK spdk_lspci 00:04:58.720 LINK interrupt_tgt 00:04:58.978 LINK nvmf_tgt 00:04:58.978 LINK spdk_tgt 00:04:58.978 LINK zipf 00:04:58.978 LINK poller_perf 00:04:58.978 LINK iscsi_tgt 00:04:58.978 LINK spdk_trace_record 00:04:58.978 LINK ioat_perf 00:04:58.978 LINK spdk_trace 00:04:58.978 CC app/spdk_nvme_perf/perf.o 00:04:59.237 CC examples/ioat/verify/verify.o 00:04:59.237 CC app/spdk_nvme_identify/identify.o 00:04:59.237 CC app/spdk_nvme_discover/discovery_aer.o 00:04:59.237 CC app/spdk_top/spdk_top.o 00:04:59.237 CC app/spdk_dd/spdk_dd.o 00:04:59.237 CC test/dma/test_dma/test_dma.o 00:04:59.237 CC app/fio/nvme/fio_plugin.o 00:04:59.495 CC examples/thread/thread/thread_ex.o 00:04:59.495 LINK verify 00:04:59.495 LINK spdk_nvme_discover 00:04:59.495 CC test/app/bdev_svc/bdev_svc.o 
00:04:59.753 LINK thread 00:04:59.753 TEST_HEADER include/spdk/accel.h 00:04:59.753 TEST_HEADER include/spdk/accel_module.h 00:04:59.753 TEST_HEADER include/spdk/assert.h 00:04:59.753 TEST_HEADER include/spdk/barrier.h 00:04:59.753 TEST_HEADER include/spdk/base64.h 00:04:59.753 TEST_HEADER include/spdk/bdev.h 00:04:59.753 TEST_HEADER include/spdk/bdev_module.h 00:04:59.753 TEST_HEADER include/spdk/bdev_zone.h 00:04:59.753 TEST_HEADER include/spdk/bit_array.h 00:04:59.753 LINK bdev_svc 00:04:59.753 TEST_HEADER include/spdk/bit_pool.h 00:04:59.753 TEST_HEADER include/spdk/blob_bdev.h 00:04:59.753 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:59.753 TEST_HEADER include/spdk/blobfs.h 00:04:59.753 TEST_HEADER include/spdk/blob.h 00:04:59.753 TEST_HEADER include/spdk/conf.h 00:04:59.753 TEST_HEADER include/spdk/config.h 00:04:59.753 TEST_HEADER include/spdk/cpuset.h 00:04:59.753 TEST_HEADER include/spdk/crc16.h 00:04:59.753 TEST_HEADER include/spdk/crc32.h 00:04:59.753 TEST_HEADER include/spdk/crc64.h 00:04:59.753 TEST_HEADER include/spdk/dif.h 00:04:59.753 TEST_HEADER include/spdk/dma.h 00:04:59.753 TEST_HEADER include/spdk/endian.h 00:04:59.753 TEST_HEADER include/spdk/env_dpdk.h 00:04:59.753 TEST_HEADER include/spdk/env.h 00:04:59.753 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:59.753 TEST_HEADER include/spdk/event.h 00:04:59.753 TEST_HEADER include/spdk/fd_group.h 00:04:59.753 TEST_HEADER include/spdk/fd.h 00:04:59.753 TEST_HEADER include/spdk/file.h 00:04:59.753 TEST_HEADER include/spdk/fsdev.h 00:04:59.753 LINK spdk_dd 00:04:59.753 TEST_HEADER include/spdk/fsdev_module.h 00:04:59.753 TEST_HEADER include/spdk/ftl.h 00:04:59.753 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:59.753 TEST_HEADER include/spdk/gpt_spec.h 00:04:59.753 TEST_HEADER include/spdk/hexlify.h 00:04:59.753 TEST_HEADER include/spdk/histogram_data.h 00:04:59.753 TEST_HEADER include/spdk/idxd.h 00:04:59.753 TEST_HEADER include/spdk/idxd_spec.h 00:04:59.753 TEST_HEADER include/spdk/init.h 00:04:59.753 TEST_HEADER include/spdk/ioat.h 00:04:59.753 TEST_HEADER include/spdk/ioat_spec.h 00:04:59.753 TEST_HEADER include/spdk/iscsi_spec.h 00:04:59.753 TEST_HEADER include/spdk/json.h 00:04:59.753 TEST_HEADER include/spdk/jsonrpc.h 00:04:59.753 TEST_HEADER include/spdk/keyring.h 00:04:59.753 TEST_HEADER include/spdk/keyring_module.h 00:04:59.753 TEST_HEADER include/spdk/likely.h 00:04:59.753 TEST_HEADER include/spdk/log.h 00:04:59.753 TEST_HEADER include/spdk/lvol.h 00:04:59.753 TEST_HEADER include/spdk/md5.h 00:04:59.753 TEST_HEADER include/spdk/memory.h 00:04:59.753 TEST_HEADER include/spdk/mmio.h 00:04:59.753 TEST_HEADER include/spdk/nbd.h 00:05:00.012 TEST_HEADER include/spdk/net.h 00:05:00.012 TEST_HEADER include/spdk/notify.h 00:05:00.012 TEST_HEADER include/spdk/nvme.h 00:05:00.012 TEST_HEADER include/spdk/nvme_intel.h 00:05:00.012 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:00.012 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:00.012 TEST_HEADER include/spdk/nvme_spec.h 00:05:00.012 TEST_HEADER include/spdk/nvme_zns.h 00:05:00.012 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:00.012 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:00.012 TEST_HEADER include/spdk/nvmf.h 00:05:00.012 TEST_HEADER include/spdk/nvmf_spec.h 00:05:00.012 TEST_HEADER include/spdk/nvmf_transport.h 00:05:00.012 TEST_HEADER include/spdk/opal.h 00:05:00.012 TEST_HEADER include/spdk/opal_spec.h 00:05:00.012 TEST_HEADER include/spdk/pci_ids.h 00:05:00.012 TEST_HEADER include/spdk/pipe.h 00:05:00.012 LINK test_dma 00:05:00.012 TEST_HEADER include/spdk/queue.h 
00:05:00.012 TEST_HEADER include/spdk/reduce.h 00:05:00.012 TEST_HEADER include/spdk/rpc.h 00:05:00.012 TEST_HEADER include/spdk/scheduler.h 00:05:00.012 TEST_HEADER include/spdk/scsi.h 00:05:00.012 TEST_HEADER include/spdk/scsi_spec.h 00:05:00.012 TEST_HEADER include/spdk/sock.h 00:05:00.012 TEST_HEADER include/spdk/stdinc.h 00:05:00.012 TEST_HEADER include/spdk/string.h 00:05:00.012 TEST_HEADER include/spdk/thread.h 00:05:00.012 TEST_HEADER include/spdk/trace.h 00:05:00.012 TEST_HEADER include/spdk/trace_parser.h 00:05:00.012 TEST_HEADER include/spdk/tree.h 00:05:00.012 TEST_HEADER include/spdk/ublk.h 00:05:00.012 TEST_HEADER include/spdk/util.h 00:05:00.012 TEST_HEADER include/spdk/uuid.h 00:05:00.012 TEST_HEADER include/spdk/version.h 00:05:00.012 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:00.012 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:00.012 TEST_HEADER include/spdk/vhost.h 00:05:00.012 TEST_HEADER include/spdk/vmd.h 00:05:00.012 TEST_HEADER include/spdk/xor.h 00:05:00.012 TEST_HEADER include/spdk/zipf.h 00:05:00.012 CXX test/cpp_headers/accel.o 00:05:00.012 LINK spdk_nvme 00:05:00.012 CC examples/sock/hello_world/hello_sock.o 00:05:00.270 CXX test/cpp_headers/accel_module.o 00:05:00.271 LINK spdk_nvme_perf 00:05:00.271 CC examples/vmd/lsvmd/lsvmd.o 00:05:00.271 CC examples/idxd/perf/perf.o 00:05:00.271 LINK spdk_top 00:05:00.271 LINK nvme_fuzz 00:05:00.271 CC app/fio/bdev/fio_plugin.o 00:05:00.271 LINK spdk_nvme_identify 00:05:00.271 CXX test/cpp_headers/assert.o 00:05:00.271 CC app/vhost/vhost.o 00:05:00.271 LINK lsvmd 00:05:00.533 LINK hello_sock 00:05:00.533 CXX test/cpp_headers/barrier.o 00:05:00.533 CC examples/vmd/led/led.o 00:05:00.533 LINK vhost 00:05:00.533 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:00.533 CC test/app/histogram_perf/histogram_perf.o 00:05:00.792 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:00.792 LINK idxd_perf 00:05:00.792 CXX test/cpp_headers/base64.o 00:05:00.792 LINK led 00:05:00.792 CC examples/accel/perf/accel_perf.o 00:05:00.792 CC examples/blob/hello_world/hello_blob.o 00:05:00.792 LINK histogram_perf 00:05:00.792 LINK spdk_bdev 00:05:01.049 CXX test/cpp_headers/bdev.o 00:05:01.049 CC examples/blob/cli/blobcli.o 00:05:01.049 LINK hello_fsdev 00:05:01.049 LINK hello_blob 00:05:01.049 CC examples/nvme/hello_world/hello_world.o 00:05:01.049 CC test/env/vtophys/vtophys.o 00:05:01.049 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:01.049 CXX test/cpp_headers/bdev_module.o 00:05:01.307 CC test/env/mem_callbacks/mem_callbacks.o 00:05:01.307 CXX test/cpp_headers/bdev_zone.o 00:05:01.307 LINK vtophys 00:05:01.307 LINK env_dpdk_post_init 00:05:01.307 LINK hello_world 00:05:01.307 LINK accel_perf 00:05:01.307 CC test/env/memory/memory_ut.o 00:05:01.565 CC test/env/pci/pci_ut.o 00:05:01.565 CXX test/cpp_headers/bit_array.o 00:05:01.565 LINK blobcli 00:05:01.566 CC examples/nvme/reconnect/reconnect.o 00:05:01.566 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:01.566 CC examples/nvme/arbitration/arbitration.o 00:05:01.566 CC examples/nvme/hotplug/hotplug.o 00:05:01.824 CXX test/cpp_headers/bit_pool.o 00:05:01.824 LINK mem_callbacks 00:05:01.824 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:01.824 CXX test/cpp_headers/blob_bdev.o 00:05:01.824 LINK hotplug 00:05:01.824 LINK pci_ut 00:05:01.824 CXX test/cpp_headers/blobfs_bdev.o 00:05:02.080 LINK arbitration 00:05:02.080 LINK reconnect 00:05:02.080 LINK cmb_copy 00:05:02.080 CXX test/cpp_headers/blobfs.o 00:05:02.080 LINK nvme_manage 00:05:02.080 CC 
test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:02.336 CC test/app/jsoncat/jsoncat.o 00:05:02.336 CC test/app/stub/stub.o 00:05:02.336 CC examples/bdev/hello_world/hello_bdev.o 00:05:02.336 CXX test/cpp_headers/blob.o 00:05:02.336 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:02.336 CC examples/nvme/abort/abort.o 00:05:02.336 CC examples/bdev/bdevperf/bdevperf.o 00:05:02.336 LINK jsoncat 00:05:02.594 LINK stub 00:05:02.594 CXX test/cpp_headers/conf.o 00:05:02.594 CC test/event/event_perf/event_perf.o 00:05:02.594 LINK hello_bdev 00:05:02.594 LINK iscsi_fuzz 00:05:02.594 LINK memory_ut 00:05:02.594 CC test/event/reactor/reactor.o 00:05:02.594 CXX test/cpp_headers/config.o 00:05:02.852 CXX test/cpp_headers/cpuset.o 00:05:02.852 LINK event_perf 00:05:02.852 CC test/event/reactor_perf/reactor_perf.o 00:05:02.852 LINK abort 00:05:02.852 LINK reactor 00:05:02.852 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:02.852 CXX test/cpp_headers/crc16.o 00:05:02.852 LINK vhost_fuzz 00:05:02.852 LINK reactor_perf 00:05:03.110 CC test/event/app_repeat/app_repeat.o 00:05:03.110 CC test/event/scheduler/scheduler.o 00:05:03.110 CXX test/cpp_headers/crc32.o 00:05:03.110 LINK pmr_persistence 00:05:03.110 CC test/nvme/aer/aer.o 00:05:03.110 CC test/nvme/reset/reset.o 00:05:03.110 LINK app_repeat 00:05:03.110 CC test/nvme/sgl/sgl.o 00:05:03.110 CC test/rpc_client/rpc_client_test.o 00:05:03.110 CXX test/cpp_headers/crc64.o 00:05:03.369 CC test/nvme/e2edp/nvme_dp.o 00:05:03.369 LINK scheduler 00:05:03.369 CXX test/cpp_headers/dif.o 00:05:03.369 LINK bdevperf 00:05:03.369 CXX test/cpp_headers/dma.o 00:05:03.369 LINK reset 00:05:03.369 LINK aer 00:05:03.369 LINK rpc_client_test 00:05:03.369 CXX test/cpp_headers/endian.o 00:05:03.627 CXX test/cpp_headers/env_dpdk.o 00:05:03.627 LINK nvme_dp 00:05:03.627 LINK sgl 00:05:03.627 CC test/nvme/overhead/overhead.o 00:05:03.627 CC test/accel/dif/dif.o 00:05:03.627 CC test/nvme/err_injection/err_injection.o 00:05:03.627 CXX test/cpp_headers/env.o 00:05:03.627 CC test/nvme/startup/startup.o 00:05:03.627 CC examples/nvmf/nvmf/nvmf.o 00:05:03.885 CC test/nvme/reserve/reserve.o 00:05:03.885 CC test/blobfs/mkfs/mkfs.o 00:05:03.885 CC test/nvme/simple_copy/simple_copy.o 00:05:03.885 CXX test/cpp_headers/event.o 00:05:03.885 LINK err_injection 00:05:03.885 CC test/lvol/esnap/esnap.o 00:05:03.885 LINK startup 00:05:03.885 LINK overhead 00:05:03.885 LINK reserve 00:05:04.143 LINK mkfs 00:05:04.143 CXX test/cpp_headers/fd_group.o 00:05:04.143 LINK nvmf 00:05:04.143 LINK simple_copy 00:05:04.143 CC test/nvme/connect_stress/connect_stress.o 00:05:04.143 CC test/nvme/boot_partition/boot_partition.o 00:05:04.143 CXX test/cpp_headers/fd.o 00:05:04.419 CC test/nvme/compliance/nvme_compliance.o 00:05:04.419 CC test/nvme/fused_ordering/fused_ordering.o 00:05:04.419 CXX test/cpp_headers/file.o 00:05:04.419 CXX test/cpp_headers/fsdev.o 00:05:04.419 LINK boot_partition 00:05:04.419 LINK connect_stress 00:05:04.419 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:04.419 LINK dif 00:05:04.419 CXX test/cpp_headers/fsdev_module.o 00:05:04.419 CC test/nvme/fdp/fdp.o 00:05:04.419 CXX test/cpp_headers/ftl.o 00:05:04.677 LINK fused_ordering 00:05:04.677 CXX test/cpp_headers/fuse_dispatcher.o 00:05:04.677 CC test/nvme/cuse/cuse.o 00:05:04.677 LINK nvme_compliance 00:05:04.677 CXX test/cpp_headers/gpt_spec.o 00:05:04.677 LINK doorbell_aers 00:05:04.677 CXX test/cpp_headers/hexlify.o 00:05:04.677 CXX test/cpp_headers/histogram_data.o 00:05:04.677 CXX test/cpp_headers/idxd.o 00:05:04.934 CXX 
test/cpp_headers/idxd_spec.o 00:05:04.934 CXX test/cpp_headers/init.o 00:05:04.934 LINK fdp 00:05:04.934 CXX test/cpp_headers/ioat.o 00:05:04.934 CXX test/cpp_headers/ioat_spec.o 00:05:04.934 CXX test/cpp_headers/iscsi_spec.o 00:05:04.934 CXX test/cpp_headers/json.o 00:05:05.191 CC test/bdev/bdevio/bdevio.o 00:05:05.191 CXX test/cpp_headers/jsonrpc.o 00:05:05.191 CXX test/cpp_headers/keyring.o 00:05:05.191 CXX test/cpp_headers/keyring_module.o 00:05:05.191 CXX test/cpp_headers/likely.o 00:05:05.191 CXX test/cpp_headers/log.o 00:05:05.191 CXX test/cpp_headers/lvol.o 00:05:05.191 CXX test/cpp_headers/md5.o 00:05:05.191 CXX test/cpp_headers/memory.o 00:05:05.449 CXX test/cpp_headers/mmio.o 00:05:05.449 CXX test/cpp_headers/nbd.o 00:05:05.449 CXX test/cpp_headers/net.o 00:05:05.449 CXX test/cpp_headers/notify.o 00:05:05.449 CXX test/cpp_headers/nvme.o 00:05:05.449 CXX test/cpp_headers/nvme_intel.o 00:05:05.449 CXX test/cpp_headers/nvme_ocssd.o 00:05:05.449 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:05.449 CXX test/cpp_headers/nvme_spec.o 00:05:05.449 CXX test/cpp_headers/nvme_zns.o 00:05:05.449 LINK bdevio 00:05:05.449 CXX test/cpp_headers/nvmf_cmd.o 00:05:05.706 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:05.706 CXX test/cpp_headers/nvmf.o 00:05:05.706 CXX test/cpp_headers/nvmf_spec.o 00:05:05.706 CXX test/cpp_headers/nvmf_transport.o 00:05:05.706 CXX test/cpp_headers/opal.o 00:05:05.707 CXX test/cpp_headers/opal_spec.o 00:05:05.707 CXX test/cpp_headers/pci_ids.o 00:05:05.707 CXX test/cpp_headers/pipe.o 00:05:05.707 CXX test/cpp_headers/queue.o 00:05:05.965 CXX test/cpp_headers/reduce.o 00:05:05.965 CXX test/cpp_headers/rpc.o 00:05:05.965 CXX test/cpp_headers/scheduler.o 00:05:05.965 CXX test/cpp_headers/scsi.o 00:05:05.965 CXX test/cpp_headers/scsi_spec.o 00:05:05.965 CXX test/cpp_headers/sock.o 00:05:05.965 CXX test/cpp_headers/stdinc.o 00:05:05.965 CXX test/cpp_headers/string.o 00:05:05.965 CXX test/cpp_headers/thread.o 00:05:06.223 CXX test/cpp_headers/trace.o 00:05:06.223 CXX test/cpp_headers/trace_parser.o 00:05:06.223 CXX test/cpp_headers/tree.o 00:05:06.223 CXX test/cpp_headers/ublk.o 00:05:06.223 CXX test/cpp_headers/util.o 00:05:06.223 CXX test/cpp_headers/uuid.o 00:05:06.223 CXX test/cpp_headers/version.o 00:05:06.223 CXX test/cpp_headers/vfio_user_pci.o 00:05:06.223 CXX test/cpp_headers/vfio_user_spec.o 00:05:06.223 CXX test/cpp_headers/vhost.o 00:05:06.223 CXX test/cpp_headers/vmd.o 00:05:06.223 CXX test/cpp_headers/xor.o 00:05:06.482 CXX test/cpp_headers/zipf.o 00:05:06.482 LINK cuse 00:05:10.784 LINK esnap 00:05:10.784 00:05:10.784 real 1m38.308s 00:05:10.784 user 8m25.002s 00:05:10.784 sys 2m10.942s 00:05:10.784 14:09:38 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:05:10.784 ************************************ 00:05:10.784 END TEST make 00:05:10.784 ************************************ 00:05:10.784 14:09:38 make -- common/autotest_common.sh@10 -- $ set +x 00:05:10.784 14:09:38 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:10.784 14:09:38 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:10.784 14:09:38 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:10.784 14:09:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:10.784 14:09:38 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:10.784 14:09:38 -- pm/common@44 -- $ pid=5251 00:05:10.784 14:09:38 -- pm/common@50 -- $ kill -TERM 5251 00:05:10.784 14:09:38 -- pm/common@42 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:05:10.784 14:09:38 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:10.784 14:09:38 -- pm/common@44 -- $ pid=5252 00:05:10.784 14:09:38 -- pm/common@50 -- $ kill -TERM 5252 00:05:10.784 14:09:38 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:05:10.784 14:09:38 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:10.784 14:09:38 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:10.784 14:09:38 -- common/autotest_common.sh@1691 -- # lcov --version 00:05:10.784 14:09:38 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:11.044 14:09:38 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:11.044 14:09:38 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:11.044 14:09:38 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:11.044 14:09:38 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:11.044 14:09:38 -- scripts/common.sh@336 -- # IFS=.-: 00:05:11.044 14:09:38 -- scripts/common.sh@336 -- # read -ra ver1 00:05:11.044 14:09:38 -- scripts/common.sh@337 -- # IFS=.-: 00:05:11.044 14:09:38 -- scripts/common.sh@337 -- # read -ra ver2 00:05:11.044 14:09:38 -- scripts/common.sh@338 -- # local 'op=<' 00:05:11.044 14:09:38 -- scripts/common.sh@340 -- # ver1_l=2 00:05:11.044 14:09:38 -- scripts/common.sh@341 -- # ver2_l=1 00:05:11.044 14:09:38 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:11.044 14:09:38 -- scripts/common.sh@344 -- # case "$op" in 00:05:11.045 14:09:38 -- scripts/common.sh@345 -- # : 1 00:05:11.045 14:09:38 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:11.045 14:09:38 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:11.045 14:09:38 -- scripts/common.sh@365 -- # decimal 1 00:05:11.045 14:09:38 -- scripts/common.sh@353 -- # local d=1 00:05:11.045 14:09:38 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:11.045 14:09:38 -- scripts/common.sh@355 -- # echo 1 00:05:11.045 14:09:38 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:11.045 14:09:38 -- scripts/common.sh@366 -- # decimal 2 00:05:11.045 14:09:38 -- scripts/common.sh@353 -- # local d=2 00:05:11.045 14:09:38 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:11.045 14:09:38 -- scripts/common.sh@355 -- # echo 2 00:05:11.045 14:09:38 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:11.045 14:09:38 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:11.045 14:09:38 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:11.045 14:09:38 -- scripts/common.sh@368 -- # return 0 00:05:11.045 14:09:38 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:11.045 14:09:38 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:11.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.045 --rc genhtml_branch_coverage=1 00:05:11.045 --rc genhtml_function_coverage=1 00:05:11.045 --rc genhtml_legend=1 00:05:11.045 --rc geninfo_all_blocks=1 00:05:11.045 --rc geninfo_unexecuted_blocks=1 00:05:11.045 00:05:11.045 ' 00:05:11.045 14:09:38 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:11.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.045 --rc genhtml_branch_coverage=1 00:05:11.045 --rc genhtml_function_coverage=1 00:05:11.045 --rc genhtml_legend=1 00:05:11.045 --rc geninfo_all_blocks=1 00:05:11.045 --rc geninfo_unexecuted_blocks=1 00:05:11.045 00:05:11.045 ' 00:05:11.045 14:09:38 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:11.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.045 --rc genhtml_branch_coverage=1 00:05:11.045 --rc genhtml_function_coverage=1 00:05:11.045 --rc genhtml_legend=1 00:05:11.045 --rc geninfo_all_blocks=1 00:05:11.045 --rc geninfo_unexecuted_blocks=1 00:05:11.045 00:05:11.045 ' 00:05:11.045 14:09:38 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:11.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.045 --rc genhtml_branch_coverage=1 00:05:11.045 --rc genhtml_function_coverage=1 00:05:11.045 --rc genhtml_legend=1 00:05:11.045 --rc geninfo_all_blocks=1 00:05:11.045 --rc geninfo_unexecuted_blocks=1 00:05:11.045 00:05:11.045 ' 00:05:11.045 14:09:38 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:11.045 14:09:38 -- nvmf/common.sh@7 -- # uname -s 00:05:11.045 14:09:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:11.045 14:09:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:11.045 14:09:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:11.045 14:09:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:11.045 14:09:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:11.045 14:09:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:11.045 14:09:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:11.045 14:09:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:11.045 14:09:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:11.045 14:09:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:11.045 14:09:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:05:11.045 
14:09:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:05:11.045 14:09:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:11.045 14:09:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:11.045 14:09:38 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:05:11.045 14:09:38 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:11.045 14:09:38 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:11.045 14:09:38 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:11.045 14:09:38 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:11.045 14:09:38 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:11.045 14:09:38 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:11.045 14:09:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.045 14:09:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.045 14:09:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.045 14:09:38 -- paths/export.sh@5 -- # export PATH 00:05:11.045 14:09:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.045 14:09:38 -- nvmf/common.sh@51 -- # : 0 00:05:11.045 14:09:38 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:11.045 14:09:38 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:11.045 14:09:38 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:11.045 14:09:38 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:11.045 14:09:38 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:11.045 14:09:38 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:11.045 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:11.045 14:09:38 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:11.045 14:09:38 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:11.045 14:09:38 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:11.045 14:09:38 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:11.045 14:09:38 -- spdk/autotest.sh@32 -- # uname -s 00:05:11.045 14:09:38 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:11.045 14:09:38 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:11.045 14:09:38 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:11.045 14:09:38 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:11.045 14:09:38 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:11.045 14:09:38 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:11.045 14:09:38 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:11.045 14:09:38 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:11.045 14:09:38 -- spdk/autotest.sh@48 -- # udevadm_pid=55030 00:05:11.045 14:09:38 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:11.045 14:09:38 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:11.045 14:09:38 -- pm/common@17 -- # local monitor 00:05:11.045 14:09:38 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:11.045 14:09:38 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:11.045 14:09:38 -- pm/common@25 -- # sleep 1 00:05:11.045 14:09:38 -- pm/common@21 -- # date +%s 00:05:11.045 14:09:38 -- pm/common@21 -- # date +%s 00:05:11.045 14:09:38 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730902178 00:05:11.045 14:09:38 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730902178 00:05:11.045 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730902178_collect-vmstat.pm.log 00:05:11.045 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730902178_collect-cpu-load.pm.log 00:05:11.981 14:09:39 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:11.981 14:09:39 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:11.981 14:09:39 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:11.981 14:09:39 -- common/autotest_common.sh@10 -- # set +x 00:05:11.981 14:09:39 -- spdk/autotest.sh@59 -- # create_test_list 00:05:11.981 14:09:39 -- common/autotest_common.sh@750 -- # xtrace_disable 00:05:11.981 14:09:39 -- common/autotest_common.sh@10 -- # set +x 00:05:12.244 14:09:39 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:12.244 14:09:39 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:12.244 14:09:39 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:12.244 14:09:39 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:12.244 14:09:39 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:12.244 14:09:39 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:12.244 14:09:39 -- common/autotest_common.sh@1455 -- # uname 00:05:12.244 14:09:39 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:05:12.244 14:09:39 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:12.244 14:09:39 -- common/autotest_common.sh@1475 -- # uname 00:05:12.244 14:09:39 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:05:12.244 14:09:39 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:12.244 14:09:39 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:12.244 lcov: LCOV version 1.15 00:05:12.244 14:09:39 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:30.345 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:30.345 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:48.422 14:10:13 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:48.422 14:10:13 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:48.422 14:10:13 -- common/autotest_common.sh@10 -- # set +x 00:05:48.422 14:10:13 -- spdk/autotest.sh@78 -- # rm -f 00:05:48.422 14:10:13 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:48.422 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:48.422 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:48.422 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:48.422 14:10:14 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:48.422 14:10:14 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:48.422 14:10:14 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:48.422 14:10:14 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:48.422 14:10:14 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:48.422 14:10:14 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:48.422 14:10:14 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:48.422 14:10:14 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:48.422 14:10:14 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:48.422 14:10:14 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:48.422 14:10:14 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:05:48.422 14:10:14 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:05:48.422 14:10:14 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:48.422 14:10:14 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:48.422 14:10:14 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:48.422 14:10:14 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:05:48.422 14:10:14 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:05:48.422 14:10:14 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:48.422 14:10:14 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:48.422 14:10:14 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:48.422 14:10:14 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:05:48.422 14:10:14 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:05:48.422 14:10:14 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:48.422 14:10:14 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:48.422 14:10:14 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:48.422 14:10:14 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:48.422 14:10:14 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:48.422 14:10:14 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:48.422 14:10:14 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:48.422 14:10:14 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:48.422 No valid GPT data, bailing 
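The pass traced just above walks /dev/nvme*n!(*p*), skips zoned namespaces, and, when the partition-table probe on a namespace comes back empty ("No valid GPT data, bailing"), zero-fills its first MiB with the dd runs that follow. A minimal stand-alone sketch of that pattern, assuming plain blkid in place of the repo's scripts/spdk-gpt.py helper, and meant as an illustration rather than a copy of the script:

  #!/usr/bin/env bash
  # Illustration of the zoned-filter + "wipe if no partition table" pass above;
  # blkid stands in for scripts/spdk-gpt.py, everything else mirrors the trace.
  shopt -s extglob
  for dev in /dev/nvme*n!(*p*); do
      name=$(basename "$dev")
      # skip zoned namespaces, mirroring the /sys/block/<name>/queue/zoned check
      if [[ -e /sys/block/$name/queue/zoned && $(cat /sys/block/$name/queue/zoned) != none ]]; then
          continue
      fi
      # an empty PTTYPE means no partition table was found, so the namespace is
      # treated as free and its first MiB is zeroed, as in the dd output below
      if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
          dd if=/dev/zero of="$dev" bs=1M count=1
      fi
  done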
00:05:48.422 14:10:14 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:48.422 14:10:14 -- scripts/common.sh@394 -- # pt= 00:05:48.423 14:10:14 -- scripts/common.sh@395 -- # return 1 00:05:48.423 14:10:14 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:48.423 1+0 records in 00:05:48.423 1+0 records out 00:05:48.423 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00612758 s, 171 MB/s 00:05:48.423 14:10:14 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:48.423 14:10:14 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:48.423 14:10:14 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:48.423 14:10:14 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:48.423 14:10:14 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:48.423 No valid GPT data, bailing 00:05:48.423 14:10:14 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:48.423 14:10:14 -- scripts/common.sh@394 -- # pt= 00:05:48.423 14:10:14 -- scripts/common.sh@395 -- # return 1 00:05:48.423 14:10:14 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:48.423 1+0 records in 00:05:48.423 1+0 records out 00:05:48.423 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00344277 s, 305 MB/s 00:05:48.423 14:10:14 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:48.423 14:10:14 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:48.423 14:10:14 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:05:48.423 14:10:14 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:05:48.423 14:10:14 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:48.423 No valid GPT data, bailing 00:05:48.423 14:10:15 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:48.423 14:10:15 -- scripts/common.sh@394 -- # pt= 00:05:48.423 14:10:15 -- scripts/common.sh@395 -- # return 1 00:05:48.423 14:10:15 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:48.423 1+0 records in 00:05:48.423 1+0 records out 00:05:48.423 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00445803 s, 235 MB/s 00:05:48.423 14:10:15 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:48.423 14:10:15 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:48.423 14:10:15 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:05:48.423 14:10:15 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:05:48.423 14:10:15 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:48.423 No valid GPT data, bailing 00:05:48.423 14:10:15 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:48.423 14:10:15 -- scripts/common.sh@394 -- # pt= 00:05:48.423 14:10:15 -- scripts/common.sh@395 -- # return 1 00:05:48.423 14:10:15 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:48.423 1+0 records in 00:05:48.423 1+0 records out 00:05:48.423 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00542501 s, 193 MB/s 00:05:48.423 14:10:15 -- spdk/autotest.sh@105 -- # sync 00:05:48.423 14:10:15 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:48.423 14:10:15 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:48.423 14:10:15 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:50.958 14:10:18 -- spdk/autotest.sh@111 -- # uname -s 00:05:50.958 14:10:18 -- spdk/autotest.sh@111 -- # [[ Linux 
== Linux ]] 00:05:50.958 14:10:18 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:50.958 14:10:18 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:51.525 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:51.525 Hugepages 00:05:51.525 node hugesize free / total 00:05:51.525 node0 1048576kB 0 / 0 00:05:51.525 node0 2048kB 0 / 0 00:05:51.525 00:05:51.525 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:51.525 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:51.783 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:51.784 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:51.784 14:10:19 -- spdk/autotest.sh@117 -- # uname -s 00:05:51.784 14:10:19 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:51.784 14:10:19 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:51.784 14:10:19 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:52.734 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:52.734 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:52.994 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:52.994 14:10:20 -- common/autotest_common.sh@1515 -- # sleep 1 00:05:53.931 14:10:21 -- common/autotest_common.sh@1516 -- # bdfs=() 00:05:53.931 14:10:21 -- common/autotest_common.sh@1516 -- # local bdfs 00:05:53.931 14:10:21 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:05:53.931 14:10:21 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:05:53.931 14:10:21 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:53.931 14:10:21 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:53.931 14:10:21 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:53.931 14:10:21 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:53.931 14:10:21 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:53.931 14:10:21 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:05:53.931 14:10:21 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:53.931 14:10:21 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:54.500 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:54.500 Waiting for block devices as requested 00:05:54.758 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:54.758 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:54.758 14:10:22 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:54.758 14:10:22 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:54.759 14:10:22 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:54.759 14:10:22 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:05:54.759 14:10:22 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:54.759 14:10:22 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:54.759 14:10:22 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:55.017 14:10:22 -- common/autotest_common.sh@1490 -- # 
printf '%s\n' nvme1 00:05:55.017 14:10:22 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:05:55.017 14:10:22 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:05:55.017 14:10:22 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:05:55.017 14:10:22 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:55.017 14:10:22 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:55.017 14:10:22 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:55.017 14:10:22 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:55.017 14:10:22 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:55.017 14:10:22 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:05:55.017 14:10:22 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:55.017 14:10:22 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:55.017 14:10:22 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:55.018 14:10:22 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:55.018 14:10:22 -- common/autotest_common.sh@1541 -- # continue 00:05:55.018 14:10:22 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:55.018 14:10:22 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:55.018 14:10:22 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:55.018 14:10:22 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:05:55.018 14:10:22 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:55.018 14:10:22 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:55.018 14:10:22 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:55.018 14:10:22 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:55.018 14:10:22 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:55.018 14:10:22 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:55.018 14:10:22 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:55.018 14:10:22 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:55.018 14:10:22 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:55.018 14:10:22 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:55.018 14:10:22 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:55.018 14:10:22 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:55.018 14:10:22 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:05:55.018 14:10:22 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:55.018 14:10:22 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:55.018 14:10:22 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:55.018 14:10:22 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:55.018 14:10:22 -- common/autotest_common.sh@1541 -- # continue 00:05:55.018 14:10:22 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:55.018 14:10:22 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:55.018 14:10:22 -- common/autotest_common.sh@10 -- # set +x 00:05:55.018 14:10:22 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:55.018 14:10:22 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:55.018 14:10:22 -- common/autotest_common.sh@10 -- # set +x 00:05:55.018 14:10:22 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:55.952 0000:00:03.0 (1af4 
1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:55.952 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:55.952 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:56.210 14:10:23 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:56.210 14:10:23 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:56.210 14:10:23 -- common/autotest_common.sh@10 -- # set +x 00:05:56.210 14:10:23 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:56.210 14:10:23 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:56.210 14:10:23 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:56.210 14:10:23 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:56.210 14:10:23 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:56.210 14:10:23 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:56.210 14:10:23 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:56.210 14:10:23 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:56.210 14:10:23 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:56.210 14:10:23 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:56.210 14:10:23 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:56.210 14:10:23 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:56.210 14:10:23 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:56.210 14:10:23 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:05:56.210 14:10:23 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:56.210 14:10:23 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:56.210 14:10:23 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:56.210 14:10:23 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:56.210 14:10:23 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:56.210 14:10:23 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:56.210 14:10:23 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:56.210 14:10:23 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:56.210 14:10:23 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:56.210 14:10:23 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:05:56.210 14:10:23 -- common/autotest_common.sh@1570 -- # return 0 00:05:56.210 14:10:23 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:56.210 14:10:23 -- common/autotest_common.sh@1578 -- # return 0 00:05:56.210 14:10:23 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:56.210 14:10:23 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:56.210 14:10:23 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:56.210 14:10:23 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:56.210 14:10:23 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:56.210 14:10:23 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:56.210 14:10:23 -- common/autotest_common.sh@10 -- # set +x 00:05:56.210 14:10:23 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:05:56.210 14:10:23 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:05:56.210 14:10:23 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:05:56.210 14:10:23 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:56.210 14:10:23 -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:56.210 14:10:23 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:56.210 14:10:23 -- common/autotest_common.sh@10 -- # set +x 00:05:56.469 ************************************ 00:05:56.469 START TEST env 00:05:56.469 ************************************ 00:05:56.469 14:10:23 env -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:56.469 * Looking for test storage... 00:05:56.469 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:56.469 14:10:23 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:56.469 14:10:23 env -- common/autotest_common.sh@1691 -- # lcov --version 00:05:56.469 14:10:23 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:56.469 14:10:24 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:56.469 14:10:24 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:56.469 14:10:24 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:56.469 14:10:24 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:56.469 14:10:24 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:56.469 14:10:24 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:56.469 14:10:24 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:56.469 14:10:24 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:56.469 14:10:24 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:56.469 14:10:24 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:56.469 14:10:24 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:56.469 14:10:24 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:56.469 14:10:24 env -- scripts/common.sh@344 -- # case "$op" in 00:05:56.469 14:10:24 env -- scripts/common.sh@345 -- # : 1 00:05:56.469 14:10:24 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:56.469 14:10:24 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:56.469 14:10:24 env -- scripts/common.sh@365 -- # decimal 1 00:05:56.469 14:10:24 env -- scripts/common.sh@353 -- # local d=1 00:05:56.469 14:10:24 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:56.469 14:10:24 env -- scripts/common.sh@355 -- # echo 1 00:05:56.469 14:10:24 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:56.469 14:10:24 env -- scripts/common.sh@366 -- # decimal 2 00:05:56.469 14:10:24 env -- scripts/common.sh@353 -- # local d=2 00:05:56.469 14:10:24 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:56.469 14:10:24 env -- scripts/common.sh@355 -- # echo 2 00:05:56.469 14:10:24 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:56.469 14:10:24 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:56.469 14:10:24 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:56.469 14:10:24 env -- scripts/common.sh@368 -- # return 0 00:05:56.469 14:10:24 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:56.469 14:10:24 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:56.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.469 --rc genhtml_branch_coverage=1 00:05:56.469 --rc genhtml_function_coverage=1 00:05:56.469 --rc genhtml_legend=1 00:05:56.469 --rc geninfo_all_blocks=1 00:05:56.469 --rc geninfo_unexecuted_blocks=1 00:05:56.469 00:05:56.469 ' 00:05:56.469 14:10:24 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:56.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.469 --rc genhtml_branch_coverage=1 00:05:56.469 --rc genhtml_function_coverage=1 00:05:56.469 --rc genhtml_legend=1 00:05:56.469 --rc geninfo_all_blocks=1 00:05:56.469 --rc geninfo_unexecuted_blocks=1 00:05:56.469 00:05:56.469 ' 00:05:56.469 14:10:24 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:56.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.469 --rc genhtml_branch_coverage=1 00:05:56.469 --rc genhtml_function_coverage=1 00:05:56.469 --rc genhtml_legend=1 00:05:56.469 --rc geninfo_all_blocks=1 00:05:56.469 --rc geninfo_unexecuted_blocks=1 00:05:56.469 00:05:56.469 ' 00:05:56.469 14:10:24 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:56.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.469 --rc genhtml_branch_coverage=1 00:05:56.469 --rc genhtml_function_coverage=1 00:05:56.469 --rc genhtml_legend=1 00:05:56.469 --rc geninfo_all_blocks=1 00:05:56.469 --rc geninfo_unexecuted_blocks=1 00:05:56.469 00:05:56.469 ' 00:05:56.469 14:10:24 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:56.469 14:10:24 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:56.469 14:10:24 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:56.469 14:10:24 env -- common/autotest_common.sh@10 -- # set +x 00:05:56.727 ************************************ 00:05:56.727 START TEST env_memory 00:05:56.727 ************************************ 00:05:56.727 14:10:24 env.env_memory -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:56.727 00:05:56.727 00:05:56.727 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.727 http://cunit.sourceforge.net/ 00:05:56.727 00:05:56.727 00:05:56.727 Suite: memory 00:05:56.727 Test: alloc and free memory map ...[2024-11-06 14:10:24.177163] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:56.728 passed 00:05:56.728 Test: mem map translation ...[2024-11-06 14:10:24.222083] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:56.728 [2024-11-06 14:10:24.222225] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:56.728 [2024-11-06 14:10:24.222377] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:56.728 [2024-11-06 14:10:24.222469] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:56.728 passed 00:05:56.728 Test: mem map registration ...[2024-11-06 14:10:24.290721] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:56.728 [2024-11-06 14:10:24.290879] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:56.728 passed 00:05:56.986 Test: mem map adjacent registrations ...passed 00:05:56.986 00:05:56.986 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.986 suites 1 1 n/a 0 0 00:05:56.986 tests 4 4 4 0 0 00:05:56.986 asserts 152 152 152 0 n/a 00:05:56.986 00:05:56.986 Elapsed time = 0.243 seconds 00:05:56.986 00:05:56.986 real 0m0.300s 00:05:56.986 ************************************ 00:05:56.986 END TEST env_memory 00:05:56.986 ************************************ 00:05:56.986 user 0m0.254s 00:05:56.986 sys 0m0.035s 00:05:56.986 14:10:24 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:56.986 14:10:24 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:56.986 14:10:24 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:56.986 14:10:24 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:56.986 14:10:24 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:56.986 14:10:24 env -- common/autotest_common.sh@10 -- # set +x 00:05:56.986 ************************************ 00:05:56.986 START TEST env_vtophys 00:05:56.986 ************************************ 00:05:56.986 14:10:24 env.env_vtophys -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:56.986 EAL: lib.eal log level changed from notice to debug 00:05:56.986 EAL: Detected lcore 0 as core 0 on socket 0 00:05:56.986 EAL: Detected lcore 1 as core 0 on socket 0 00:05:56.986 EAL: Detected lcore 2 as core 0 on socket 0 00:05:56.986 EAL: Detected lcore 3 as core 0 on socket 0 00:05:56.986 EAL: Detected lcore 4 as core 0 on socket 0 00:05:56.986 EAL: Detected lcore 5 as core 0 on socket 0 00:05:56.986 EAL: Detected lcore 6 as core 0 on socket 0 00:05:56.986 EAL: Detected lcore 7 as core 0 on socket 0 00:05:56.986 EAL: Detected lcore 8 as core 0 on socket 0 00:05:56.986 EAL: Detected lcore 9 as core 0 on socket 0 00:05:56.986 EAL: Maximum logical cores by configuration: 128 00:05:56.986 EAL: Detected CPU lcores: 10 00:05:56.986 EAL: Detected NUMA nodes: 1 00:05:56.986 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:56.986 EAL: Detected shared linkage of DPDK 00:05:56.986 EAL: No 
shared files mode enabled, IPC will be disabled 00:05:56.987 EAL: Selected IOVA mode 'PA' 00:05:56.987 EAL: Probing VFIO support... 00:05:56.987 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:56.987 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:56.987 EAL: Ask a virtual area of 0x2e000 bytes 00:05:56.987 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:56.987 EAL: Setting up physically contiguous memory... 00:05:56.987 EAL: Setting maximum number of open files to 524288 00:05:56.987 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:56.987 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:56.987 EAL: Ask a virtual area of 0x61000 bytes 00:05:56.987 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:56.987 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:56.987 EAL: Ask a virtual area of 0x400000000 bytes 00:05:56.987 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:56.987 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:56.987 EAL: Ask a virtual area of 0x61000 bytes 00:05:56.987 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:56.987 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:56.987 EAL: Ask a virtual area of 0x400000000 bytes 00:05:56.987 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:56.987 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:56.987 EAL: Ask a virtual area of 0x61000 bytes 00:05:56.987 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:56.987 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:56.987 EAL: Ask a virtual area of 0x400000000 bytes 00:05:56.987 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:56.987 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:56.987 EAL: Ask a virtual area of 0x61000 bytes 00:05:56.987 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:56.987 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:56.987 EAL: Ask a virtual area of 0x400000000 bytes 00:05:56.987 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:56.987 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:56.987 EAL: Hugepages will be freed exactly as allocated. 00:05:56.987 EAL: No shared files mode enabled, IPC is disabled 00:05:56.987 EAL: No shared files mode enabled, IPC is disabled 00:05:57.246 EAL: TSC frequency is ~2490000 KHz 00:05:57.246 EAL: Main lcore 0 is ready (tid=7f257c211a40;cpuset=[0]) 00:05:57.246 EAL: Trying to obtain current memory policy. 00:05:57.246 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:57.246 EAL: Restoring previous memory policy: 0 00:05:57.246 EAL: request: mp_malloc_sync 00:05:57.246 EAL: No shared files mode enabled, IPC is disabled 00:05:57.246 EAL: Heap on socket 0 was expanded by 2MB 00:05:57.246 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:57.246 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:57.246 EAL: Mem event callback 'spdk:(nil)' registered 00:05:57.246 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:05:57.246 00:05:57.246 00:05:57.246 CUnit - A unit testing framework for C - Version 2.1-3 00:05:57.246 http://cunit.sourceforge.net/ 00:05:57.246 00:05:57.246 00:05:57.246 Suite: components_suite 00:05:57.813 Test: vtophys_malloc_test ...passed 00:05:57.813 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:57.813 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:57.813 EAL: Restoring previous memory policy: 4 00:05:57.813 EAL: Calling mem event callback 'spdk:(nil)' 00:05:57.813 EAL: request: mp_malloc_sync 00:05:57.813 EAL: No shared files mode enabled, IPC is disabled 00:05:57.813 EAL: Heap on socket 0 was expanded by 4MB 00:05:57.813 EAL: Calling mem event callback 'spdk:(nil)' 00:05:57.813 EAL: request: mp_malloc_sync 00:05:57.813 EAL: No shared files mode enabled, IPC is disabled 00:05:57.813 EAL: Heap on socket 0 was shrunk by 4MB 00:05:57.813 EAL: Trying to obtain current memory policy. 00:05:57.813 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:57.813 EAL: Restoring previous memory policy: 4 00:05:57.813 EAL: Calling mem event callback 'spdk:(nil)' 00:05:57.813 EAL: request: mp_malloc_sync 00:05:57.813 EAL: No shared files mode enabled, IPC is disabled 00:05:57.813 EAL: Heap on socket 0 was expanded by 6MB 00:05:57.813 EAL: Calling mem event callback 'spdk:(nil)' 00:05:57.813 EAL: request: mp_malloc_sync 00:05:57.813 EAL: No shared files mode enabled, IPC is disabled 00:05:57.813 EAL: Heap on socket 0 was shrunk by 6MB 00:05:57.813 EAL: Trying to obtain current memory policy. 00:05:57.813 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:57.813 EAL: Restoring previous memory policy: 4 00:05:57.813 EAL: Calling mem event callback 'spdk:(nil)' 00:05:57.813 EAL: request: mp_malloc_sync 00:05:57.813 EAL: No shared files mode enabled, IPC is disabled 00:05:57.813 EAL: Heap on socket 0 was expanded by 10MB 00:05:57.813 EAL: Calling mem event callback 'spdk:(nil)' 00:05:57.813 EAL: request: mp_malloc_sync 00:05:57.813 EAL: No shared files mode enabled, IPC is disabled 00:05:57.813 EAL: Heap on socket 0 was shrunk by 10MB 00:05:57.813 EAL: Trying to obtain current memory policy. 00:05:57.813 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:57.813 EAL: Restoring previous memory policy: 4 00:05:57.813 EAL: Calling mem event callback 'spdk:(nil)' 00:05:57.813 EAL: request: mp_malloc_sync 00:05:57.813 EAL: No shared files mode enabled, IPC is disabled 00:05:57.813 EAL: Heap on socket 0 was expanded by 18MB 00:05:57.813 EAL: Calling mem event callback 'spdk:(nil)' 00:05:57.813 EAL: request: mp_malloc_sync 00:05:57.813 EAL: No shared files mode enabled, IPC is disabled 00:05:57.813 EAL: Heap on socket 0 was shrunk by 18MB 00:05:57.813 EAL: Trying to obtain current memory policy. 00:05:57.813 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:57.813 EAL: Restoring previous memory policy: 4 00:05:57.813 EAL: Calling mem event callback 'spdk:(nil)' 00:05:57.813 EAL: request: mp_malloc_sync 00:05:57.813 EAL: No shared files mode enabled, IPC is disabled 00:05:57.813 EAL: Heap on socket 0 was expanded by 34MB 00:05:58.071 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.071 EAL: request: mp_malloc_sync 00:05:58.071 EAL: No shared files mode enabled, IPC is disabled 00:05:58.071 EAL: Heap on socket 0 was shrunk by 34MB 00:05:58.071 EAL: Trying to obtain current memory policy. 
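One detail worth noting in this malloc test: the heap expand/shrink sizes (4 MB, 6 MB, 10 MB, 18 MB, 34 MB here, continuing up to 1026 MB further down in the run) step through 2^k + 2 MB for k = 1..10. A throwaway loop that reproduces the sequence, purely to confirm the arithmetic and not taken from the test itself:

  for k in $(seq 1 10); do echo "$(( (1 << k) + 2 ))MB"; done
  # prints 4MB, 6MB, 10MB, 18MB, 34MB, 66MB, 130MB, 258MB, 514MB, 1026MB (one per line)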
00:05:58.071 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.071 EAL: Restoring previous memory policy: 4 00:05:58.071 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.071 EAL: request: mp_malloc_sync 00:05:58.071 EAL: No shared files mode enabled, IPC is disabled 00:05:58.071 EAL: Heap on socket 0 was expanded by 66MB 00:05:58.071 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.071 EAL: request: mp_malloc_sync 00:05:58.072 EAL: No shared files mode enabled, IPC is disabled 00:05:58.072 EAL: Heap on socket 0 was shrunk by 66MB 00:05:58.329 EAL: Trying to obtain current memory policy. 00:05:58.329 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.329 EAL: Restoring previous memory policy: 4 00:05:58.329 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.329 EAL: request: mp_malloc_sync 00:05:58.329 EAL: No shared files mode enabled, IPC is disabled 00:05:58.329 EAL: Heap on socket 0 was expanded by 130MB 00:05:58.587 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.587 EAL: request: mp_malloc_sync 00:05:58.587 EAL: No shared files mode enabled, IPC is disabled 00:05:58.587 EAL: Heap on socket 0 was shrunk by 130MB 00:05:58.845 EAL: Trying to obtain current memory policy. 00:05:58.845 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.845 EAL: Restoring previous memory policy: 4 00:05:58.845 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.845 EAL: request: mp_malloc_sync 00:05:58.845 EAL: No shared files mode enabled, IPC is disabled 00:05:58.845 EAL: Heap on socket 0 was expanded by 258MB 00:05:59.412 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.412 EAL: request: mp_malloc_sync 00:05:59.412 EAL: No shared files mode enabled, IPC is disabled 00:05:59.412 EAL: Heap on socket 0 was shrunk by 258MB 00:05:59.671 EAL: Trying to obtain current memory policy. 00:05:59.671 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:59.930 EAL: Restoring previous memory policy: 4 00:05:59.930 EAL: Calling mem event callback 'spdk:(nil)' 00:05:59.930 EAL: request: mp_malloc_sync 00:05:59.930 EAL: No shared files mode enabled, IPC is disabled 00:05:59.930 EAL: Heap on socket 0 was expanded by 514MB 00:06:00.865 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.865 EAL: request: mp_malloc_sync 00:06:00.865 EAL: No shared files mode enabled, IPC is disabled 00:06:00.865 EAL: Heap on socket 0 was shrunk by 514MB 00:06:01.801 EAL: Trying to obtain current memory policy. 
00:06:01.801 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:01.801 EAL: Restoring previous memory policy: 4 00:06:01.801 EAL: Calling mem event callback 'spdk:(nil)' 00:06:01.801 EAL: request: mp_malloc_sync 00:06:01.801 EAL: No shared files mode enabled, IPC is disabled 00:06:01.801 EAL: Heap on socket 0 was expanded by 1026MB 00:06:03.701 EAL: Calling mem event callback 'spdk:(nil)' 00:06:03.959 EAL: request: mp_malloc_sync 00:06:03.959 EAL: No shared files mode enabled, IPC is disabled 00:06:03.959 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:05.875 passed 00:06:05.875 00:06:05.875 Run Summary: Type Total Ran Passed Failed Inactive 00:06:05.875 suites 1 1 n/a 0 0 00:06:05.875 tests 2 2 2 0 0 00:06:05.875 asserts 5831 5831 5831 0 n/a 00:06:05.875 00:06:05.875 Elapsed time = 8.422 seconds 00:06:05.875 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.875 EAL: request: mp_malloc_sync 00:06:05.875 EAL: No shared files mode enabled, IPC is disabled 00:06:05.875 EAL: Heap on socket 0 was shrunk by 2MB 00:06:05.875 EAL: No shared files mode enabled, IPC is disabled 00:06:05.875 EAL: No shared files mode enabled, IPC is disabled 00:06:05.875 EAL: No shared files mode enabled, IPC is disabled 00:06:05.875 00:06:05.875 real 0m8.806s 00:06:05.875 user 0m7.636s 00:06:05.875 sys 0m1.005s 00:06:05.875 14:10:33 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:05.875 14:10:33 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:05.875 ************************************ 00:06:05.875 END TEST env_vtophys 00:06:05.875 ************************************ 00:06:05.876 14:10:33 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:05.876 14:10:33 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:05.876 14:10:33 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:05.876 14:10:33 env -- common/autotest_common.sh@10 -- # set +x 00:06:05.876 ************************************ 00:06:05.876 START TEST env_pci 00:06:05.876 ************************************ 00:06:05.876 14:10:33 env.env_pci -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:05.876 00:06:05.876 00:06:05.876 CUnit - A unit testing framework for C - Version 2.1-3 00:06:05.876 http://cunit.sourceforge.net/ 00:06:05.876 00:06:05.876 00:06:05.876 Suite: pci 00:06:05.876 Test: pci_hook ...[2024-11-06 14:10:33.402965] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57422 has claimed it 00:06:05.876 passed 00:06:05.876 00:06:05.876 Run Summary: Type Total Ran Passed Failed Inactive 00:06:05.876 suites 1 1 n/a 0 0 00:06:05.876 tests 1 1 1 0 0 00:06:05.876 asserts 25 25 25 0 n/a 00:06:05.876 00:06:05.876 Elapsed time = 0.008 seconds 00:06:05.876 EAL: Cannot find device (10000:00:01.0) 00:06:05.876 EAL: Failed to attach device on primary process 00:06:05.876 00:06:05.876 real 0m0.098s 00:06:05.876 user 0m0.030s 00:06:05.876 sys 0m0.067s 00:06:05.876 14:10:33 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:05.876 14:10:33 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:05.876 ************************************ 00:06:05.876 END TEST env_pci 00:06:05.876 ************************************ 00:06:06.134 14:10:33 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:06.134 14:10:33 env -- env/env.sh@15 -- # uname 00:06:06.134 14:10:33 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:06.134 14:10:33 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:06.134 14:10:33 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:06.134 14:10:33 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:06:06.134 14:10:33 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:06.134 14:10:33 env -- common/autotest_common.sh@10 -- # set +x 00:06:06.134 ************************************ 00:06:06.134 START TEST env_dpdk_post_init 00:06:06.134 ************************************ 00:06:06.134 14:10:33 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:06.134 EAL: Detected CPU lcores: 10 00:06:06.134 EAL: Detected NUMA nodes: 1 00:06:06.134 EAL: Detected shared linkage of DPDK 00:06:06.134 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:06.134 EAL: Selected IOVA mode 'PA' 00:06:06.134 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:06.393 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:06:06.393 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:06:06.393 Starting DPDK initialization... 00:06:06.393 Starting SPDK post initialization... 00:06:06.393 SPDK NVMe probe 00:06:06.393 Attaching to 0000:00:10.0 00:06:06.393 Attaching to 0000:00:11.0 00:06:06.393 Attached to 0000:00:10.0 00:06:06.393 Attached to 0000:00:11.0 00:06:06.393 Cleaning up... 00:06:06.393 00:06:06.393 real 0m0.304s 00:06:06.393 user 0m0.096s 00:06:06.393 sys 0m0.109s 00:06:06.393 14:10:33 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:06.393 14:10:33 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:06.393 ************************************ 00:06:06.393 END TEST env_dpdk_post_init 00:06:06.393 ************************************ 00:06:06.393 14:10:33 env -- env/env.sh@26 -- # uname 00:06:06.393 14:10:33 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:06.393 14:10:33 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:06.393 14:10:33 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:06.393 14:10:33 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:06.393 14:10:33 env -- common/autotest_common.sh@10 -- # set +x 00:06:06.393 ************************************ 00:06:06.393 START TEST env_mem_callbacks 00:06:06.393 ************************************ 00:06:06.393 14:10:33 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:06.393 EAL: Detected CPU lcores: 10 00:06:06.393 EAL: Detected NUMA nodes: 1 00:06:06.393 EAL: Detected shared linkage of DPDK 00:06:06.393 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:06.393 EAL: Selected IOVA mode 'PA' 00:06:06.652 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:06.652 00:06:06.652 00:06:06.652 CUnit - A unit testing framework for C - Version 2.1-3 00:06:06.652 http://cunit.sourceforge.net/ 00:06:06.652 00:06:06.652 00:06:06.652 Suite: memory 00:06:06.652 Test: test ... 
00:06:06.652 register 0x200000200000 2097152 00:06:06.652 malloc 3145728 00:06:06.652 register 0x200000400000 4194304 00:06:06.652 buf 0x2000004fffc0 len 3145728 PASSED 00:06:06.652 malloc 64 00:06:06.652 buf 0x2000004ffec0 len 64 PASSED 00:06:06.652 malloc 4194304 00:06:06.652 register 0x200000800000 6291456 00:06:06.652 buf 0x2000009fffc0 len 4194304 PASSED 00:06:06.652 free 0x2000004fffc0 3145728 00:06:06.652 free 0x2000004ffec0 64 00:06:06.652 unregister 0x200000400000 4194304 PASSED 00:06:06.652 free 0x2000009fffc0 4194304 00:06:06.652 unregister 0x200000800000 6291456 PASSED 00:06:06.652 malloc 8388608 00:06:06.652 register 0x200000400000 10485760 00:06:06.652 buf 0x2000005fffc0 len 8388608 PASSED 00:06:06.652 free 0x2000005fffc0 8388608 00:06:06.652 unregister 0x200000400000 10485760 PASSED 00:06:06.652 passed 00:06:06.652 00:06:06.652 Run Summary: Type Total Ran Passed Failed Inactive 00:06:06.652 suites 1 1 n/a 0 0 00:06:06.652 tests 1 1 1 0 0 00:06:06.652 asserts 15 15 15 0 n/a 00:06:06.652 00:06:06.652 Elapsed time = 0.074 seconds 00:06:06.652 00:06:06.652 real 0m0.289s 00:06:06.652 user 0m0.115s 00:06:06.652 sys 0m0.073s 00:06:06.652 14:10:34 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:06.652 ************************************ 00:06:06.652 END TEST env_mem_callbacks 00:06:06.652 ************************************ 00:06:06.652 14:10:34 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:06.652 00:06:06.652 real 0m10.417s 00:06:06.652 user 0m8.365s 00:06:06.652 sys 0m1.682s 00:06:06.652 14:10:34 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:06.652 14:10:34 env -- common/autotest_common.sh@10 -- # set +x 00:06:06.652 ************************************ 00:06:06.652 END TEST env 00:06:06.652 ************************************ 00:06:06.910 14:10:34 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:06.910 14:10:34 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:06.910 14:10:34 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:06.910 14:10:34 -- common/autotest_common.sh@10 -- # set +x 00:06:06.910 ************************************ 00:06:06.910 START TEST rpc 00:06:06.910 ************************************ 00:06:06.910 14:10:34 rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:06.910 * Looking for test storage... 
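The env_mem_callbacks transcript above pairs every malloc with a register/unregister of the hugepage-sized region backing it, which is the notification path SPDK uses to keep its address translation map current. To re-run only that suite outside the harness, the binary path from its run_test line can be invoked directly (running as root for hugepage access is an assumption, not something the log shows):

  sudo /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks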
00:06:06.910 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:06.910 14:10:34 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:06.910 14:10:34 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:06:06.910 14:10:34 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:07.169 14:10:34 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:07.169 14:10:34 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:07.169 14:10:34 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:07.169 14:10:34 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:07.169 14:10:34 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:07.169 14:10:34 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:07.169 14:10:34 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:07.169 14:10:34 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:07.169 14:10:34 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:07.169 14:10:34 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:07.169 14:10:34 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:07.169 14:10:34 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:07.169 14:10:34 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:07.169 14:10:34 rpc -- scripts/common.sh@345 -- # : 1 00:06:07.169 14:10:34 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:07.169 14:10:34 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:07.169 14:10:34 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:07.169 14:10:34 rpc -- scripts/common.sh@353 -- # local d=1 00:06:07.169 14:10:34 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:07.169 14:10:34 rpc -- scripts/common.sh@355 -- # echo 1 00:06:07.169 14:10:34 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:07.169 14:10:34 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:07.169 14:10:34 rpc -- scripts/common.sh@353 -- # local d=2 00:06:07.169 14:10:34 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:07.169 14:10:34 rpc -- scripts/common.sh@355 -- # echo 2 00:06:07.169 14:10:34 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:07.169 14:10:34 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:07.169 14:10:34 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:07.169 14:10:34 rpc -- scripts/common.sh@368 -- # return 0 00:06:07.169 14:10:34 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:07.169 14:10:34 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:07.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.169 --rc genhtml_branch_coverage=1 00:06:07.169 --rc genhtml_function_coverage=1 00:06:07.169 --rc genhtml_legend=1 00:06:07.169 --rc geninfo_all_blocks=1 00:06:07.169 --rc geninfo_unexecuted_blocks=1 00:06:07.169 00:06:07.169 ' 00:06:07.169 14:10:34 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:07.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.169 --rc genhtml_branch_coverage=1 00:06:07.169 --rc genhtml_function_coverage=1 00:06:07.169 --rc genhtml_legend=1 00:06:07.169 --rc geninfo_all_blocks=1 00:06:07.169 --rc geninfo_unexecuted_blocks=1 00:06:07.169 00:06:07.169 ' 00:06:07.169 14:10:34 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:07.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.169 --rc genhtml_branch_coverage=1 00:06:07.169 --rc genhtml_function_coverage=1 00:06:07.169 --rc 
genhtml_legend=1 00:06:07.169 --rc geninfo_all_blocks=1 00:06:07.169 --rc geninfo_unexecuted_blocks=1 00:06:07.169 00:06:07.169 ' 00:06:07.169 14:10:34 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:07.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.169 --rc genhtml_branch_coverage=1 00:06:07.169 --rc genhtml_function_coverage=1 00:06:07.169 --rc genhtml_legend=1 00:06:07.169 --rc geninfo_all_blocks=1 00:06:07.169 --rc geninfo_unexecuted_blocks=1 00:06:07.169 00:06:07.169 ' 00:06:07.169 14:10:34 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57549 00:06:07.169 14:10:34 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:07.169 14:10:34 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:07.169 14:10:34 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57549 00:06:07.169 14:10:34 rpc -- common/autotest_common.sh@833 -- # '[' -z 57549 ']' 00:06:07.169 14:10:34 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.169 14:10:34 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:07.169 14:10:34 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.169 14:10:34 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:07.169 14:10:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.169 [2024-11-06 14:10:34.733906] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:06:07.169 [2024-11-06 14:10:34.734055] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57549 ] 00:06:07.429 [2024-11-06 14:10:34.921552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.429 [2024-11-06 14:10:35.043987] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:07.429 [2024-11-06 14:10:35.044053] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57549' to capture a snapshot of events at runtime. 00:06:07.429 [2024-11-06 14:10:35.044067] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:07.429 [2024-11-06 14:10:35.044083] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:07.429 [2024-11-06 14:10:35.044094] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57549 for offline analysis/debug. 
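The app_setup_trace notices above spell out how to capture tracepoint data for this target, which was started with -e bdev. Taken directly from those messages, either command below works while pid 57549 is alive; copying the shm file keeps the data for offline analysis after the target exits (the /tmp destination is an assumption):

  spdk_trace -s spdk_tgt -p 57549
  cp /dev/shm/spdk_tgt_trace.pid57549 /tmp/spdk_tgt_trace.pid57549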
00:06:07.429 [2024-11-06 14:10:35.045536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.688 [2024-11-06 14:10:35.313660] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:08.626 14:10:35 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:08.626 14:10:35 rpc -- common/autotest_common.sh@866 -- # return 0 00:06:08.626 14:10:35 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:08.626 14:10:35 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:08.626 14:10:35 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:08.626 14:10:35 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:08.626 14:10:35 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:08.626 14:10:35 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:08.626 14:10:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.626 ************************************ 00:06:08.626 START TEST rpc_integrity 00:06:08.626 ************************************ 00:06:08.626 14:10:35 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:06:08.626 14:10:35 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:08.627 14:10:35 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.627 14:10:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:08.627 14:10:35 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.627 14:10:35 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:08.627 14:10:35 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:08.627 14:10:36 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:08.627 14:10:36 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:08.627 14:10:36 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.627 14:10:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:08.627 14:10:36 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.627 14:10:36 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:08.627 14:10:36 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:08.627 14:10:36 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.627 14:10:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:08.627 14:10:36 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.627 14:10:36 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:08.627 { 00:06:08.627 "name": "Malloc0", 00:06:08.627 "aliases": [ 00:06:08.627 "1d46fcb9-eaf4-4425-b526-4a3e5e6ac5ca" 00:06:08.627 ], 00:06:08.627 "product_name": "Malloc disk", 00:06:08.627 "block_size": 512, 00:06:08.627 "num_blocks": 16384, 00:06:08.627 "uuid": "1d46fcb9-eaf4-4425-b526-4a3e5e6ac5ca", 00:06:08.627 "assigned_rate_limits": { 00:06:08.627 "rw_ios_per_sec": 0, 00:06:08.627 "rw_mbytes_per_sec": 0, 00:06:08.627 "r_mbytes_per_sec": 0, 00:06:08.627 "w_mbytes_per_sec": 0 00:06:08.627 }, 00:06:08.627 "claimed": false, 00:06:08.627 "zoned": false, 00:06:08.627 
"supported_io_types": { 00:06:08.627 "read": true, 00:06:08.627 "write": true, 00:06:08.627 "unmap": true, 00:06:08.627 "flush": true, 00:06:08.627 "reset": true, 00:06:08.627 "nvme_admin": false, 00:06:08.627 "nvme_io": false, 00:06:08.627 "nvme_io_md": false, 00:06:08.627 "write_zeroes": true, 00:06:08.627 "zcopy": true, 00:06:08.627 "get_zone_info": false, 00:06:08.627 "zone_management": false, 00:06:08.627 "zone_append": false, 00:06:08.627 "compare": false, 00:06:08.627 "compare_and_write": false, 00:06:08.627 "abort": true, 00:06:08.627 "seek_hole": false, 00:06:08.627 "seek_data": false, 00:06:08.627 "copy": true, 00:06:08.627 "nvme_iov_md": false 00:06:08.627 }, 00:06:08.627 "memory_domains": [ 00:06:08.627 { 00:06:08.627 "dma_device_id": "system", 00:06:08.627 "dma_device_type": 1 00:06:08.627 }, 00:06:08.627 { 00:06:08.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:08.627 "dma_device_type": 2 00:06:08.627 } 00:06:08.627 ], 00:06:08.627 "driver_specific": {} 00:06:08.627 } 00:06:08.627 ]' 00:06:08.627 14:10:36 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:08.627 14:10:36 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:08.627 14:10:36 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:08.627 14:10:36 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.627 14:10:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:08.627 [2024-11-06 14:10:36.109158] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:08.627 [2024-11-06 14:10:36.109331] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:08.627 [2024-11-06 14:10:36.109431] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:06:08.627 [2024-11-06 14:10:36.109501] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:08.627 [2024-11-06 14:10:36.112475] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:08.627 [2024-11-06 14:10:36.112614] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:08.627 Passthru0 00:06:08.627 14:10:36 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.627 14:10:36 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:08.627 14:10:36 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.627 14:10:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:08.627 14:10:36 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.627 14:10:36 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:08.627 { 00:06:08.627 "name": "Malloc0", 00:06:08.627 "aliases": [ 00:06:08.627 "1d46fcb9-eaf4-4425-b526-4a3e5e6ac5ca" 00:06:08.627 ], 00:06:08.627 "product_name": "Malloc disk", 00:06:08.627 "block_size": 512, 00:06:08.627 "num_blocks": 16384, 00:06:08.627 "uuid": "1d46fcb9-eaf4-4425-b526-4a3e5e6ac5ca", 00:06:08.627 "assigned_rate_limits": { 00:06:08.627 "rw_ios_per_sec": 0, 00:06:08.627 "rw_mbytes_per_sec": 0, 00:06:08.627 "r_mbytes_per_sec": 0, 00:06:08.627 "w_mbytes_per_sec": 0 00:06:08.627 }, 00:06:08.627 "claimed": true, 00:06:08.627 "claim_type": "exclusive_write", 00:06:08.627 "zoned": false, 00:06:08.627 "supported_io_types": { 00:06:08.627 "read": true, 00:06:08.627 "write": true, 00:06:08.627 "unmap": true, 00:06:08.627 "flush": true, 00:06:08.627 "reset": true, 00:06:08.627 "nvme_admin": false, 
00:06:08.627 "nvme_io": false, 00:06:08.627 "nvme_io_md": false, 00:06:08.627 "write_zeroes": true, 00:06:08.627 "zcopy": true, 00:06:08.627 "get_zone_info": false, 00:06:08.627 "zone_management": false, 00:06:08.627 "zone_append": false, 00:06:08.627 "compare": false, 00:06:08.627 "compare_and_write": false, 00:06:08.627 "abort": true, 00:06:08.627 "seek_hole": false, 00:06:08.627 "seek_data": false, 00:06:08.627 "copy": true, 00:06:08.627 "nvme_iov_md": false 00:06:08.627 }, 00:06:08.627 "memory_domains": [ 00:06:08.627 { 00:06:08.627 "dma_device_id": "system", 00:06:08.627 "dma_device_type": 1 00:06:08.627 }, 00:06:08.627 { 00:06:08.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:08.627 "dma_device_type": 2 00:06:08.627 } 00:06:08.627 ], 00:06:08.627 "driver_specific": {} 00:06:08.627 }, 00:06:08.627 { 00:06:08.627 "name": "Passthru0", 00:06:08.627 "aliases": [ 00:06:08.627 "bc090df8-8972-5f23-b175-66bb95759ac3" 00:06:08.627 ], 00:06:08.627 "product_name": "passthru", 00:06:08.627 "block_size": 512, 00:06:08.627 "num_blocks": 16384, 00:06:08.627 "uuid": "bc090df8-8972-5f23-b175-66bb95759ac3", 00:06:08.627 "assigned_rate_limits": { 00:06:08.627 "rw_ios_per_sec": 0, 00:06:08.627 "rw_mbytes_per_sec": 0, 00:06:08.627 "r_mbytes_per_sec": 0, 00:06:08.627 "w_mbytes_per_sec": 0 00:06:08.627 }, 00:06:08.627 "claimed": false, 00:06:08.627 "zoned": false, 00:06:08.627 "supported_io_types": { 00:06:08.627 "read": true, 00:06:08.627 "write": true, 00:06:08.627 "unmap": true, 00:06:08.627 "flush": true, 00:06:08.627 "reset": true, 00:06:08.627 "nvme_admin": false, 00:06:08.627 "nvme_io": false, 00:06:08.627 "nvme_io_md": false, 00:06:08.627 "write_zeroes": true, 00:06:08.627 "zcopy": true, 00:06:08.627 "get_zone_info": false, 00:06:08.627 "zone_management": false, 00:06:08.627 "zone_append": false, 00:06:08.627 "compare": false, 00:06:08.627 "compare_and_write": false, 00:06:08.627 "abort": true, 00:06:08.627 "seek_hole": false, 00:06:08.627 "seek_data": false, 00:06:08.627 "copy": true, 00:06:08.627 "nvme_iov_md": false 00:06:08.627 }, 00:06:08.627 "memory_domains": [ 00:06:08.627 { 00:06:08.627 "dma_device_id": "system", 00:06:08.627 "dma_device_type": 1 00:06:08.627 }, 00:06:08.627 { 00:06:08.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:08.627 "dma_device_type": 2 00:06:08.627 } 00:06:08.627 ], 00:06:08.627 "driver_specific": { 00:06:08.627 "passthru": { 00:06:08.627 "name": "Passthru0", 00:06:08.627 "base_bdev_name": "Malloc0" 00:06:08.627 } 00:06:08.627 } 00:06:08.627 } 00:06:08.627 ]' 00:06:08.627 14:10:36 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:08.627 14:10:36 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:08.627 14:10:36 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:08.627 14:10:36 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.627 14:10:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:08.627 14:10:36 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.627 14:10:36 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:08.627 14:10:36 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.627 14:10:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:08.627 14:10:36 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.627 14:10:36 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:08.627 14:10:36 rpc.rpc_integrity -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.627 14:10:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:08.627 14:10:36 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.627 14:10:36 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:08.627 14:10:36 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:08.892 ************************************ 00:06:08.892 END TEST rpc_integrity 00:06:08.892 ************************************ 00:06:08.892 14:10:36 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:08.892 00:06:08.892 real 0m0.335s 00:06:08.892 user 0m0.163s 00:06:08.892 sys 0m0.068s 00:06:08.892 14:10:36 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:08.892 14:10:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:08.892 14:10:36 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:08.892 14:10:36 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:08.892 14:10:36 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:08.892 14:10:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.892 ************************************ 00:06:08.892 START TEST rpc_plugins 00:06:08.892 ************************************ 00:06:08.892 14:10:36 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:06:08.892 14:10:36 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:08.892 14:10:36 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.892 14:10:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:08.892 14:10:36 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.892 14:10:36 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:08.892 14:10:36 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:08.892 14:10:36 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.892 14:10:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:08.892 14:10:36 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.892 14:10:36 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:08.892 { 00:06:08.892 "name": "Malloc1", 00:06:08.892 "aliases": [ 00:06:08.892 "cd086c3e-5568-470f-8f31-fcae1da620a2" 00:06:08.892 ], 00:06:08.892 "product_name": "Malloc disk", 00:06:08.892 "block_size": 4096, 00:06:08.892 "num_blocks": 256, 00:06:08.892 "uuid": "cd086c3e-5568-470f-8f31-fcae1da620a2", 00:06:08.892 "assigned_rate_limits": { 00:06:08.892 "rw_ios_per_sec": 0, 00:06:08.892 "rw_mbytes_per_sec": 0, 00:06:08.892 "r_mbytes_per_sec": 0, 00:06:08.892 "w_mbytes_per_sec": 0 00:06:08.892 }, 00:06:08.892 "claimed": false, 00:06:08.892 "zoned": false, 00:06:08.892 "supported_io_types": { 00:06:08.892 "read": true, 00:06:08.892 "write": true, 00:06:08.892 "unmap": true, 00:06:08.892 "flush": true, 00:06:08.892 "reset": true, 00:06:08.892 "nvme_admin": false, 00:06:08.892 "nvme_io": false, 00:06:08.892 "nvme_io_md": false, 00:06:08.892 "write_zeroes": true, 00:06:08.892 "zcopy": true, 00:06:08.892 "get_zone_info": false, 00:06:08.892 "zone_management": false, 00:06:08.892 "zone_append": false, 00:06:08.892 "compare": false, 00:06:08.892 "compare_and_write": false, 00:06:08.892 "abort": true, 00:06:08.892 "seek_hole": false, 00:06:08.892 "seek_data": false, 00:06:08.892 "copy": true, 00:06:08.892 "nvme_iov_md": false 00:06:08.892 }, 00:06:08.892 "memory_domains": [ 00:06:08.892 { 
00:06:08.892 "dma_device_id": "system", 00:06:08.892 "dma_device_type": 1 00:06:08.892 }, 00:06:08.892 { 00:06:08.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:08.892 "dma_device_type": 2 00:06:08.892 } 00:06:08.892 ], 00:06:08.892 "driver_specific": {} 00:06:08.892 } 00:06:08.892 ]' 00:06:08.892 14:10:36 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:08.892 14:10:36 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:08.892 14:10:36 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:08.892 14:10:36 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.892 14:10:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:08.892 14:10:36 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.892 14:10:36 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:08.892 14:10:36 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.892 14:10:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:08.892 14:10:36 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.892 14:10:36 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:08.892 14:10:36 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:09.164 14:10:36 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:09.164 00:06:09.164 real 0m0.172s 00:06:09.164 user 0m0.093s 00:06:09.164 sys 0m0.030s 00:06:09.164 14:10:36 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:09.164 14:10:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:09.164 ************************************ 00:06:09.164 END TEST rpc_plugins 00:06:09.164 ************************************ 00:06:09.164 14:10:36 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:09.164 14:10:36 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:09.164 14:10:36 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:09.164 14:10:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.164 ************************************ 00:06:09.164 START TEST rpc_trace_cmd_test 00:06:09.164 ************************************ 00:06:09.164 14:10:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:06:09.164 14:10:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:09.164 14:10:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:09.164 14:10:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.164 14:10:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:09.164 14:10:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.164 14:10:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:09.164 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57549", 00:06:09.164 "tpoint_group_mask": "0x8", 00:06:09.164 "iscsi_conn": { 00:06:09.164 "mask": "0x2", 00:06:09.164 "tpoint_mask": "0x0" 00:06:09.164 }, 00:06:09.164 "scsi": { 00:06:09.164 "mask": "0x4", 00:06:09.164 "tpoint_mask": "0x0" 00:06:09.164 }, 00:06:09.164 "bdev": { 00:06:09.164 "mask": "0x8", 00:06:09.164 "tpoint_mask": "0xffffffffffffffff" 00:06:09.164 }, 00:06:09.164 "nvmf_rdma": { 00:06:09.164 "mask": "0x10", 00:06:09.164 "tpoint_mask": "0x0" 00:06:09.164 }, 00:06:09.164 "nvmf_tcp": { 00:06:09.164 "mask": "0x20", 00:06:09.164 "tpoint_mask": "0x0" 00:06:09.164 }, 00:06:09.164 "ftl": { 00:06:09.164 
"mask": "0x40", 00:06:09.164 "tpoint_mask": "0x0" 00:06:09.164 }, 00:06:09.164 "blobfs": { 00:06:09.164 "mask": "0x80", 00:06:09.164 "tpoint_mask": "0x0" 00:06:09.164 }, 00:06:09.164 "dsa": { 00:06:09.164 "mask": "0x200", 00:06:09.164 "tpoint_mask": "0x0" 00:06:09.164 }, 00:06:09.164 "thread": { 00:06:09.164 "mask": "0x400", 00:06:09.164 "tpoint_mask": "0x0" 00:06:09.164 }, 00:06:09.164 "nvme_pcie": { 00:06:09.164 "mask": "0x800", 00:06:09.164 "tpoint_mask": "0x0" 00:06:09.164 }, 00:06:09.164 "iaa": { 00:06:09.164 "mask": "0x1000", 00:06:09.164 "tpoint_mask": "0x0" 00:06:09.164 }, 00:06:09.164 "nvme_tcp": { 00:06:09.164 "mask": "0x2000", 00:06:09.164 "tpoint_mask": "0x0" 00:06:09.164 }, 00:06:09.164 "bdev_nvme": { 00:06:09.164 "mask": "0x4000", 00:06:09.164 "tpoint_mask": "0x0" 00:06:09.164 }, 00:06:09.164 "sock": { 00:06:09.164 "mask": "0x8000", 00:06:09.164 "tpoint_mask": "0x0" 00:06:09.164 }, 00:06:09.164 "blob": { 00:06:09.164 "mask": "0x10000", 00:06:09.164 "tpoint_mask": "0x0" 00:06:09.164 }, 00:06:09.164 "bdev_raid": { 00:06:09.164 "mask": "0x20000", 00:06:09.164 "tpoint_mask": "0x0" 00:06:09.164 }, 00:06:09.164 "scheduler": { 00:06:09.164 "mask": "0x40000", 00:06:09.164 "tpoint_mask": "0x0" 00:06:09.164 } 00:06:09.164 }' 00:06:09.164 14:10:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:09.164 14:10:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:06:09.164 14:10:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:09.164 14:10:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:09.164 14:10:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:09.165 14:10:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:09.165 14:10:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:09.424 14:10:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:09.424 14:10:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:09.424 ************************************ 00:06:09.424 END TEST rpc_trace_cmd_test 00:06:09.424 ************************************ 00:06:09.424 14:10:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:09.424 00:06:09.424 real 0m0.256s 00:06:09.424 user 0m0.206s 00:06:09.424 sys 0m0.042s 00:06:09.424 14:10:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:09.424 14:10:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:09.424 14:10:36 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:09.424 14:10:36 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:09.424 14:10:36 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:09.424 14:10:36 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:09.424 14:10:36 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:09.424 14:10:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.424 ************************************ 00:06:09.424 START TEST rpc_daemon_integrity 00:06:09.424 ************************************ 00:06:09.424 14:10:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:06:09.424 14:10:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:09.424 14:10:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.424 14:10:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.424 
14:10:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.424 14:10:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:09.424 14:10:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:09.424 14:10:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:09.424 14:10:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:09.424 14:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.424 14:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.424 14:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.424 14:10:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:09.424 14:10:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:09.424 14:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.424 14:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.424 14:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.424 14:10:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:09.424 { 00:06:09.424 "name": "Malloc2", 00:06:09.424 "aliases": [ 00:06:09.424 "5a5b20c1-590c-47de-bb9a-e544d1018d33" 00:06:09.424 ], 00:06:09.424 "product_name": "Malloc disk", 00:06:09.424 "block_size": 512, 00:06:09.424 "num_blocks": 16384, 00:06:09.424 "uuid": "5a5b20c1-590c-47de-bb9a-e544d1018d33", 00:06:09.424 "assigned_rate_limits": { 00:06:09.424 "rw_ios_per_sec": 0, 00:06:09.424 "rw_mbytes_per_sec": 0, 00:06:09.424 "r_mbytes_per_sec": 0, 00:06:09.424 "w_mbytes_per_sec": 0 00:06:09.424 }, 00:06:09.424 "claimed": false, 00:06:09.424 "zoned": false, 00:06:09.424 "supported_io_types": { 00:06:09.424 "read": true, 00:06:09.424 "write": true, 00:06:09.424 "unmap": true, 00:06:09.424 "flush": true, 00:06:09.424 "reset": true, 00:06:09.424 "nvme_admin": false, 00:06:09.424 "nvme_io": false, 00:06:09.424 "nvme_io_md": false, 00:06:09.424 "write_zeroes": true, 00:06:09.424 "zcopy": true, 00:06:09.424 "get_zone_info": false, 00:06:09.424 "zone_management": false, 00:06:09.424 "zone_append": false, 00:06:09.424 "compare": false, 00:06:09.424 "compare_and_write": false, 00:06:09.424 "abort": true, 00:06:09.424 "seek_hole": false, 00:06:09.424 "seek_data": false, 00:06:09.424 "copy": true, 00:06:09.424 "nvme_iov_md": false 00:06:09.424 }, 00:06:09.424 "memory_domains": [ 00:06:09.424 { 00:06:09.424 "dma_device_id": "system", 00:06:09.424 "dma_device_type": 1 00:06:09.424 }, 00:06:09.424 { 00:06:09.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:09.424 "dma_device_type": 2 00:06:09.424 } 00:06:09.424 ], 00:06:09.424 "driver_specific": {} 00:06:09.424 } 00:06:09.424 ]' 00:06:09.424 14:10:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:09.683 14:10:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:09.683 14:10:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:09.683 14:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.683 14:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.683 [2024-11-06 14:10:37.103786] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:09.683 [2024-11-06 14:10:37.103861] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:06:09.683 [2024-11-06 14:10:37.103899] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:06:09.683 [2024-11-06 14:10:37.103912] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:09.683 [2024-11-06 14:10:37.106854] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:09.683 [2024-11-06 14:10:37.106893] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:09.683 Passthru0 00:06:09.683 14:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.683 14:10:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:09.683 14:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.683 14:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.683 14:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.683 14:10:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:09.683 { 00:06:09.683 "name": "Malloc2", 00:06:09.683 "aliases": [ 00:06:09.683 "5a5b20c1-590c-47de-bb9a-e544d1018d33" 00:06:09.683 ], 00:06:09.683 "product_name": "Malloc disk", 00:06:09.683 "block_size": 512, 00:06:09.683 "num_blocks": 16384, 00:06:09.683 "uuid": "5a5b20c1-590c-47de-bb9a-e544d1018d33", 00:06:09.683 "assigned_rate_limits": { 00:06:09.683 "rw_ios_per_sec": 0, 00:06:09.683 "rw_mbytes_per_sec": 0, 00:06:09.683 "r_mbytes_per_sec": 0, 00:06:09.683 "w_mbytes_per_sec": 0 00:06:09.683 }, 00:06:09.683 "claimed": true, 00:06:09.683 "claim_type": "exclusive_write", 00:06:09.683 "zoned": false, 00:06:09.683 "supported_io_types": { 00:06:09.683 "read": true, 00:06:09.683 "write": true, 00:06:09.683 "unmap": true, 00:06:09.683 "flush": true, 00:06:09.683 "reset": true, 00:06:09.683 "nvme_admin": false, 00:06:09.683 "nvme_io": false, 00:06:09.683 "nvme_io_md": false, 00:06:09.683 "write_zeroes": true, 00:06:09.683 "zcopy": true, 00:06:09.683 "get_zone_info": false, 00:06:09.683 "zone_management": false, 00:06:09.683 "zone_append": false, 00:06:09.683 "compare": false, 00:06:09.683 "compare_and_write": false, 00:06:09.683 "abort": true, 00:06:09.683 "seek_hole": false, 00:06:09.683 "seek_data": false, 00:06:09.683 "copy": true, 00:06:09.683 "nvme_iov_md": false 00:06:09.683 }, 00:06:09.683 "memory_domains": [ 00:06:09.683 { 00:06:09.683 "dma_device_id": "system", 00:06:09.683 "dma_device_type": 1 00:06:09.683 }, 00:06:09.683 { 00:06:09.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:09.683 "dma_device_type": 2 00:06:09.683 } 00:06:09.683 ], 00:06:09.683 "driver_specific": {} 00:06:09.683 }, 00:06:09.683 { 00:06:09.683 "name": "Passthru0", 00:06:09.683 "aliases": [ 00:06:09.683 "ee5bb1e7-36f2-5d34-998d-2b48006ae81c" 00:06:09.683 ], 00:06:09.683 "product_name": "passthru", 00:06:09.683 "block_size": 512, 00:06:09.683 "num_blocks": 16384, 00:06:09.683 "uuid": "ee5bb1e7-36f2-5d34-998d-2b48006ae81c", 00:06:09.683 "assigned_rate_limits": { 00:06:09.683 "rw_ios_per_sec": 0, 00:06:09.683 "rw_mbytes_per_sec": 0, 00:06:09.683 "r_mbytes_per_sec": 0, 00:06:09.683 "w_mbytes_per_sec": 0 00:06:09.683 }, 00:06:09.683 "claimed": false, 00:06:09.683 "zoned": false, 00:06:09.683 "supported_io_types": { 00:06:09.683 "read": true, 00:06:09.683 "write": true, 00:06:09.683 "unmap": true, 00:06:09.683 "flush": true, 00:06:09.683 "reset": true, 00:06:09.683 "nvme_admin": false, 00:06:09.683 "nvme_io": false, 00:06:09.683 
"nvme_io_md": false, 00:06:09.683 "write_zeroes": true, 00:06:09.683 "zcopy": true, 00:06:09.683 "get_zone_info": false, 00:06:09.683 "zone_management": false, 00:06:09.683 "zone_append": false, 00:06:09.683 "compare": false, 00:06:09.683 "compare_and_write": false, 00:06:09.683 "abort": true, 00:06:09.683 "seek_hole": false, 00:06:09.683 "seek_data": false, 00:06:09.683 "copy": true, 00:06:09.683 "nvme_iov_md": false 00:06:09.683 }, 00:06:09.683 "memory_domains": [ 00:06:09.683 { 00:06:09.683 "dma_device_id": "system", 00:06:09.683 "dma_device_type": 1 00:06:09.683 }, 00:06:09.683 { 00:06:09.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:09.683 "dma_device_type": 2 00:06:09.683 } 00:06:09.683 ], 00:06:09.683 "driver_specific": { 00:06:09.683 "passthru": { 00:06:09.683 "name": "Passthru0", 00:06:09.683 "base_bdev_name": "Malloc2" 00:06:09.683 } 00:06:09.683 } 00:06:09.683 } 00:06:09.683 ]' 00:06:09.683 14:10:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:09.683 14:10:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:09.683 14:10:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:09.683 14:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.683 14:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.683 14:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.683 14:10:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:09.683 14:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.683 14:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.683 14:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.683 14:10:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:09.683 14:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.683 14:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.683 14:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.683 14:10:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:09.683 14:10:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:09.683 ************************************ 00:06:09.683 END TEST rpc_daemon_integrity 00:06:09.684 ************************************ 00:06:09.684 14:10:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:09.684 00:06:09.684 real 0m0.366s 00:06:09.684 user 0m0.192s 00:06:09.684 sys 0m0.066s 00:06:09.684 14:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:09.684 14:10:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.942 14:10:37 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:09.942 14:10:37 rpc -- rpc/rpc.sh@84 -- # killprocess 57549 00:06:09.942 14:10:37 rpc -- common/autotest_common.sh@952 -- # '[' -z 57549 ']' 00:06:09.942 14:10:37 rpc -- common/autotest_common.sh@956 -- # kill -0 57549 00:06:09.942 14:10:37 rpc -- common/autotest_common.sh@957 -- # uname 00:06:09.942 14:10:37 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:09.942 14:10:37 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57549 00:06:09.942 14:10:37 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 
00:06:09.942 killing process with pid 57549 00:06:09.942 14:10:37 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:09.942 14:10:37 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57549' 00:06:09.942 14:10:37 rpc -- common/autotest_common.sh@971 -- # kill 57549 00:06:09.942 14:10:37 rpc -- common/autotest_common.sh@976 -- # wait 57549 00:06:12.480 00:06:12.480 real 0m5.527s 00:06:12.480 user 0m5.951s 00:06:12.480 sys 0m1.097s 00:06:12.480 14:10:39 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:12.480 ************************************ 00:06:12.480 END TEST rpc 00:06:12.480 ************************************ 00:06:12.480 14:10:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.480 14:10:39 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:12.480 14:10:39 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:12.480 14:10:39 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:12.480 14:10:39 -- common/autotest_common.sh@10 -- # set +x 00:06:12.480 ************************************ 00:06:12.480 START TEST skip_rpc 00:06:12.480 ************************************ 00:06:12.480 14:10:39 skip_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:12.480 * Looking for test storage... 00:06:12.480 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:12.480 14:10:40 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:12.480 14:10:40 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:06:12.480 14:10:40 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:12.739 14:10:40 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:12.739 14:10:40 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.739 14:10:40 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.739 14:10:40 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.739 14:10:40 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.739 14:10:40 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.739 14:10:40 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.739 14:10:40 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.739 14:10:40 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.739 14:10:40 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.739 14:10:40 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.739 14:10:40 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.739 14:10:40 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:12.739 14:10:40 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:12.739 14:10:40 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.739 14:10:40 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:12.739 14:10:40 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:12.739 14:10:40 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:12.739 14:10:40 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.739 14:10:40 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:12.739 14:10:40 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.739 14:10:40 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:12.739 14:10:40 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:12.739 14:10:40 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.739 14:10:40 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:12.739 14:10:40 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.739 14:10:40 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.739 14:10:40 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.739 14:10:40 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:12.739 14:10:40 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.739 14:10:40 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:12.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.739 --rc genhtml_branch_coverage=1 00:06:12.739 --rc genhtml_function_coverage=1 00:06:12.739 --rc genhtml_legend=1 00:06:12.739 --rc geninfo_all_blocks=1 00:06:12.739 --rc geninfo_unexecuted_blocks=1 00:06:12.739 00:06:12.739 ' 00:06:12.739 14:10:40 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:12.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.739 --rc genhtml_branch_coverage=1 00:06:12.739 --rc genhtml_function_coverage=1 00:06:12.739 --rc genhtml_legend=1 00:06:12.739 --rc geninfo_all_blocks=1 00:06:12.739 --rc geninfo_unexecuted_blocks=1 00:06:12.739 00:06:12.739 ' 00:06:12.739 14:10:40 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:12.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.739 --rc genhtml_branch_coverage=1 00:06:12.739 --rc genhtml_function_coverage=1 00:06:12.739 --rc genhtml_legend=1 00:06:12.739 --rc geninfo_all_blocks=1 00:06:12.739 --rc geninfo_unexecuted_blocks=1 00:06:12.739 00:06:12.739 ' 00:06:12.739 14:10:40 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:12.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.739 --rc genhtml_branch_coverage=1 00:06:12.739 --rc genhtml_function_coverage=1 00:06:12.739 --rc genhtml_legend=1 00:06:12.739 --rc geninfo_all_blocks=1 00:06:12.739 --rc geninfo_unexecuted_blocks=1 00:06:12.739 00:06:12.739 ' 00:06:12.739 14:10:40 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:12.739 14:10:40 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:12.739 14:10:40 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:12.739 14:10:40 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:12.739 14:10:40 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:12.739 14:10:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.739 ************************************ 00:06:12.739 START TEST skip_rpc 00:06:12.739 ************************************ 00:06:12.739 14:10:40 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:06:12.739 14:10:40 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=57778 00:06:12.739 14:10:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:12.739 14:10:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:12.739 14:10:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:12.739 [2024-11-06 14:10:40.316966] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:06:12.740 [2024-11-06 14:10:40.317262] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57778 ] 00:06:12.998 [2024-11-06 14:10:40.505684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.998 [2024-11-06 14:10:40.622435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.257 [2024-11-06 14:10:40.889414] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:18.547 14:10:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:18.547 14:10:45 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:18.547 14:10:45 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:18.547 14:10:45 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:18.547 14:10:45 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:18.547 14:10:45 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:18.547 14:10:45 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:18.547 14:10:45 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:18.547 14:10:45 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.547 14:10:45 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.547 14:10:45 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:18.547 14:10:45 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:18.547 14:10:45 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:18.547 14:10:45 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:18.547 14:10:45 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:18.547 14:10:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:18.547 14:10:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57778 00:06:18.547 14:10:45 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 57778 ']' 00:06:18.547 14:10:45 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 57778 00:06:18.547 14:10:45 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:06:18.547 14:10:45 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:18.547 14:10:45 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57778 00:06:18.547 killing process with pid 57778 00:06:18.547 14:10:45 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:18.547 14:10:45 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:18.547 14:10:45 skip_rpc.skip_rpc -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 57778' 00:06:18.547 14:10:45 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 57778 00:06:18.547 14:10:45 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 57778 00:06:20.449 ************************************ 00:06:20.449 END TEST skip_rpc 00:06:20.449 ************************************ 00:06:20.449 00:06:20.449 real 0m7.483s 00:06:20.449 user 0m6.946s 00:06:20.449 sys 0m0.450s 00:06:20.449 14:10:47 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:20.449 14:10:47 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.449 14:10:47 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:20.449 14:10:47 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:20.449 14:10:47 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:20.449 14:10:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.449 ************************************ 00:06:20.449 START TEST skip_rpc_with_json 00:06:20.449 ************************************ 00:06:20.449 14:10:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:06:20.449 14:10:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:20.449 14:10:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57893 00:06:20.449 14:10:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:20.449 14:10:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:20.449 14:10:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57893 00:06:20.449 14:10:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 57893 ']' 00:06:20.449 14:10:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.449 14:10:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:20.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.449 14:10:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.449 14:10:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:20.449 14:10:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:20.449 [2024-11-06 14:10:47.868874] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
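The skip_rpc_with_json run that starts here drives the target over JSON-RPC and then snapshots its configuration: nvmf_get_transports is expected to fail while no transport exists, nvmf_create_transport -t tcp brings TCP up, and save_config writes the JSON dumped below to the test's config.json. A hand-run equivalent, again assuming scripts/rpc.py and the default RPC socket, with the output path taken from the CONFIG_PATH set above:

  ./scripts/rpc.py nvmf_get_transports --trtype tcp   # expected to fail: transport 'tcp' does not exist yet
  ./scripts/rpc.py nvmf_create_transport -t tcp
  ./scripts/rpc.py save_config > /home/vagrant/spdk_repo/spdk/test/rpc/config.json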
00:06:20.449 [2024-11-06 14:10:47.869019] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57893 ] 00:06:20.449 [2024-11-06 14:10:48.055082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.708 [2024-11-06 14:10:48.169347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.967 [2024-11-06 14:10:48.427719] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:21.535 14:10:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:21.535 14:10:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:06:21.535 14:10:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:21.535 14:10:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.535 14:10:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:21.535 [2024-11-06 14:10:49.057488] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:21.535 request: 00:06:21.535 { 00:06:21.535 "trtype": "tcp", 00:06:21.535 "method": "nvmf_get_transports", 00:06:21.535 "req_id": 1 00:06:21.535 } 00:06:21.535 Got JSON-RPC error response 00:06:21.535 response: 00:06:21.536 { 00:06:21.536 "code": -19, 00:06:21.536 "message": "No such device" 00:06:21.536 } 00:06:21.536 14:10:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:21.536 14:10:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:21.536 14:10:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.536 14:10:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:21.536 [2024-11-06 14:10:49.073617] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:21.536 14:10:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.536 14:10:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:21.536 14:10:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.536 14:10:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:21.795 14:10:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.795 14:10:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:21.795 { 00:06:21.795 "subsystems": [ 00:06:21.795 { 00:06:21.795 "subsystem": "fsdev", 00:06:21.795 "config": [ 00:06:21.796 { 00:06:21.796 "method": "fsdev_set_opts", 00:06:21.796 "params": { 00:06:21.796 "fsdev_io_pool_size": 65535, 00:06:21.796 "fsdev_io_cache_size": 256 00:06:21.796 } 00:06:21.796 } 00:06:21.796 ] 00:06:21.796 }, 00:06:21.796 { 00:06:21.796 "subsystem": "vfio_user_target", 00:06:21.796 "config": null 00:06:21.796 }, 00:06:21.796 { 00:06:21.796 "subsystem": "keyring", 00:06:21.796 "config": [] 00:06:21.796 }, 00:06:21.796 { 00:06:21.796 "subsystem": "iobuf", 00:06:21.796 "config": [ 00:06:21.796 { 00:06:21.796 "method": "iobuf_set_options", 00:06:21.796 "params": { 00:06:21.796 "small_pool_count": 8192, 00:06:21.796 "large_pool_count": 1024, 00:06:21.796 
"small_bufsize": 8192, 00:06:21.796 "large_bufsize": 135168, 00:06:21.796 "enable_numa": false 00:06:21.796 } 00:06:21.796 } 00:06:21.796 ] 00:06:21.796 }, 00:06:21.796 { 00:06:21.796 "subsystem": "sock", 00:06:21.796 "config": [ 00:06:21.796 { 00:06:21.796 "method": "sock_set_default_impl", 00:06:21.796 "params": { 00:06:21.796 "impl_name": "uring" 00:06:21.796 } 00:06:21.796 }, 00:06:21.796 { 00:06:21.796 "method": "sock_impl_set_options", 00:06:21.796 "params": { 00:06:21.796 "impl_name": "ssl", 00:06:21.796 "recv_buf_size": 4096, 00:06:21.796 "send_buf_size": 4096, 00:06:21.796 "enable_recv_pipe": true, 00:06:21.796 "enable_quickack": false, 00:06:21.796 "enable_placement_id": 0, 00:06:21.796 "enable_zerocopy_send_server": true, 00:06:21.796 "enable_zerocopy_send_client": false, 00:06:21.796 "zerocopy_threshold": 0, 00:06:21.796 "tls_version": 0, 00:06:21.796 "enable_ktls": false 00:06:21.796 } 00:06:21.796 }, 00:06:21.796 { 00:06:21.796 "method": "sock_impl_set_options", 00:06:21.796 "params": { 00:06:21.796 "impl_name": "posix", 00:06:21.796 "recv_buf_size": 2097152, 00:06:21.796 "send_buf_size": 2097152, 00:06:21.796 "enable_recv_pipe": true, 00:06:21.796 "enable_quickack": false, 00:06:21.796 "enable_placement_id": 0, 00:06:21.796 "enable_zerocopy_send_server": true, 00:06:21.796 "enable_zerocopy_send_client": false, 00:06:21.796 "zerocopy_threshold": 0, 00:06:21.796 "tls_version": 0, 00:06:21.796 "enable_ktls": false 00:06:21.796 } 00:06:21.796 }, 00:06:21.796 { 00:06:21.796 "method": "sock_impl_set_options", 00:06:21.796 "params": { 00:06:21.796 "impl_name": "uring", 00:06:21.796 "recv_buf_size": 2097152, 00:06:21.796 "send_buf_size": 2097152, 00:06:21.796 "enable_recv_pipe": true, 00:06:21.796 "enable_quickack": false, 00:06:21.796 "enable_placement_id": 0, 00:06:21.796 "enable_zerocopy_send_server": false, 00:06:21.796 "enable_zerocopy_send_client": false, 00:06:21.796 "zerocopy_threshold": 0, 00:06:21.796 "tls_version": 0, 00:06:21.796 "enable_ktls": false 00:06:21.796 } 00:06:21.796 } 00:06:21.796 ] 00:06:21.796 }, 00:06:21.796 { 00:06:21.796 "subsystem": "vmd", 00:06:21.796 "config": [] 00:06:21.796 }, 00:06:21.796 { 00:06:21.796 "subsystem": "accel", 00:06:21.796 "config": [ 00:06:21.796 { 00:06:21.796 "method": "accel_set_options", 00:06:21.796 "params": { 00:06:21.796 "small_cache_size": 128, 00:06:21.796 "large_cache_size": 16, 00:06:21.796 "task_count": 2048, 00:06:21.796 "sequence_count": 2048, 00:06:21.796 "buf_count": 2048 00:06:21.796 } 00:06:21.796 } 00:06:21.796 ] 00:06:21.796 }, 00:06:21.796 { 00:06:21.796 "subsystem": "bdev", 00:06:21.796 "config": [ 00:06:21.796 { 00:06:21.796 "method": "bdev_set_options", 00:06:21.796 "params": { 00:06:21.796 "bdev_io_pool_size": 65535, 00:06:21.796 "bdev_io_cache_size": 256, 00:06:21.796 "bdev_auto_examine": true, 00:06:21.796 "iobuf_small_cache_size": 128, 00:06:21.796 "iobuf_large_cache_size": 16 00:06:21.796 } 00:06:21.796 }, 00:06:21.796 { 00:06:21.796 "method": "bdev_raid_set_options", 00:06:21.796 "params": { 00:06:21.796 "process_window_size_kb": 1024, 00:06:21.796 "process_max_bandwidth_mb_sec": 0 00:06:21.796 } 00:06:21.796 }, 00:06:21.796 { 00:06:21.796 "method": "bdev_iscsi_set_options", 00:06:21.796 "params": { 00:06:21.796 "timeout_sec": 30 00:06:21.796 } 00:06:21.796 }, 00:06:21.796 { 00:06:21.796 "method": "bdev_nvme_set_options", 00:06:21.796 "params": { 00:06:21.796 "action_on_timeout": "none", 00:06:21.796 "timeout_us": 0, 00:06:21.796 "timeout_admin_us": 0, 00:06:21.796 "keep_alive_timeout_ms": 10000, 
00:06:21.796 "arbitration_burst": 0, 00:06:21.796 "low_priority_weight": 0, 00:06:21.796 "medium_priority_weight": 0, 00:06:21.796 "high_priority_weight": 0, 00:06:21.796 "nvme_adminq_poll_period_us": 10000, 00:06:21.796 "nvme_ioq_poll_period_us": 0, 00:06:21.796 "io_queue_requests": 0, 00:06:21.796 "delay_cmd_submit": true, 00:06:21.796 "transport_retry_count": 4, 00:06:21.796 "bdev_retry_count": 3, 00:06:21.796 "transport_ack_timeout": 0, 00:06:21.796 "ctrlr_loss_timeout_sec": 0, 00:06:21.796 "reconnect_delay_sec": 0, 00:06:21.796 "fast_io_fail_timeout_sec": 0, 00:06:21.796 "disable_auto_failback": false, 00:06:21.796 "generate_uuids": false, 00:06:21.796 "transport_tos": 0, 00:06:21.796 "nvme_error_stat": false, 00:06:21.796 "rdma_srq_size": 0, 00:06:21.796 "io_path_stat": false, 00:06:21.796 "allow_accel_sequence": false, 00:06:21.796 "rdma_max_cq_size": 0, 00:06:21.796 "rdma_cm_event_timeout_ms": 0, 00:06:21.797 "dhchap_digests": [ 00:06:21.797 "sha256", 00:06:21.797 "sha384", 00:06:21.797 "sha512" 00:06:21.797 ], 00:06:21.797 "dhchap_dhgroups": [ 00:06:21.797 "null", 00:06:21.797 "ffdhe2048", 00:06:21.797 "ffdhe3072", 00:06:21.797 "ffdhe4096", 00:06:21.797 "ffdhe6144", 00:06:21.797 "ffdhe8192" 00:06:21.797 ] 00:06:21.797 } 00:06:21.797 }, 00:06:21.797 { 00:06:21.797 "method": "bdev_nvme_set_hotplug", 00:06:21.797 "params": { 00:06:21.797 "period_us": 100000, 00:06:21.797 "enable": false 00:06:21.797 } 00:06:21.797 }, 00:06:21.797 { 00:06:21.797 "method": "bdev_wait_for_examine" 00:06:21.797 } 00:06:21.797 ] 00:06:21.797 }, 00:06:21.797 { 00:06:21.797 "subsystem": "scsi", 00:06:21.797 "config": null 00:06:21.797 }, 00:06:21.797 { 00:06:21.797 "subsystem": "scheduler", 00:06:21.797 "config": [ 00:06:21.797 { 00:06:21.797 "method": "framework_set_scheduler", 00:06:21.797 "params": { 00:06:21.797 "name": "static" 00:06:21.797 } 00:06:21.797 } 00:06:21.797 ] 00:06:21.797 }, 00:06:21.797 { 00:06:21.797 "subsystem": "vhost_scsi", 00:06:21.797 "config": [] 00:06:21.797 }, 00:06:21.797 { 00:06:21.797 "subsystem": "vhost_blk", 00:06:21.797 "config": [] 00:06:21.797 }, 00:06:21.797 { 00:06:21.797 "subsystem": "ublk", 00:06:21.797 "config": [] 00:06:21.797 }, 00:06:21.797 { 00:06:21.797 "subsystem": "nbd", 00:06:21.797 "config": [] 00:06:21.797 }, 00:06:21.797 { 00:06:21.797 "subsystem": "nvmf", 00:06:21.797 "config": [ 00:06:21.797 { 00:06:21.797 "method": "nvmf_set_config", 00:06:21.797 "params": { 00:06:21.797 "discovery_filter": "match_any", 00:06:21.797 "admin_cmd_passthru": { 00:06:21.797 "identify_ctrlr": false 00:06:21.797 }, 00:06:21.797 "dhchap_digests": [ 00:06:21.797 "sha256", 00:06:21.797 "sha384", 00:06:21.797 "sha512" 00:06:21.797 ], 00:06:21.797 "dhchap_dhgroups": [ 00:06:21.797 "null", 00:06:21.797 "ffdhe2048", 00:06:21.797 "ffdhe3072", 00:06:21.797 "ffdhe4096", 00:06:21.797 "ffdhe6144", 00:06:21.797 "ffdhe8192" 00:06:21.797 ] 00:06:21.797 } 00:06:21.797 }, 00:06:21.797 { 00:06:21.797 "method": "nvmf_set_max_subsystems", 00:06:21.797 "params": { 00:06:21.797 "max_subsystems": 1024 00:06:21.797 } 00:06:21.797 }, 00:06:21.797 { 00:06:21.797 "method": "nvmf_set_crdt", 00:06:21.797 "params": { 00:06:21.797 "crdt1": 0, 00:06:21.797 "crdt2": 0, 00:06:21.797 "crdt3": 0 00:06:21.797 } 00:06:21.797 }, 00:06:21.797 { 00:06:21.797 "method": "nvmf_create_transport", 00:06:21.797 "params": { 00:06:21.797 "trtype": "TCP", 00:06:21.797 "max_queue_depth": 128, 00:06:21.797 "max_io_qpairs_per_ctrlr": 127, 00:06:21.797 "in_capsule_data_size": 4096, 00:06:21.797 "max_io_size": 131072, 00:06:21.797 
"io_unit_size": 131072, 00:06:21.797 "max_aq_depth": 128, 00:06:21.797 "num_shared_buffers": 511, 00:06:21.797 "buf_cache_size": 4294967295, 00:06:21.797 "dif_insert_or_strip": false, 00:06:21.797 "zcopy": false, 00:06:21.797 "c2h_success": true, 00:06:21.797 "sock_priority": 0, 00:06:21.797 "abort_timeout_sec": 1, 00:06:21.797 "ack_timeout": 0, 00:06:21.797 "data_wr_pool_size": 0 00:06:21.797 } 00:06:21.797 } 00:06:21.797 ] 00:06:21.797 }, 00:06:21.797 { 00:06:21.797 "subsystem": "iscsi", 00:06:21.797 "config": [ 00:06:21.797 { 00:06:21.797 "method": "iscsi_set_options", 00:06:21.797 "params": { 00:06:21.797 "node_base": "iqn.2016-06.io.spdk", 00:06:21.797 "max_sessions": 128, 00:06:21.797 "max_connections_per_session": 2, 00:06:21.797 "max_queue_depth": 64, 00:06:21.797 "default_time2wait": 2, 00:06:21.797 "default_time2retain": 20, 00:06:21.797 "first_burst_length": 8192, 00:06:21.797 "immediate_data": true, 00:06:21.797 "allow_duplicated_isid": false, 00:06:21.797 "error_recovery_level": 0, 00:06:21.797 "nop_timeout": 60, 00:06:21.797 "nop_in_interval": 30, 00:06:21.797 "disable_chap": false, 00:06:21.797 "require_chap": false, 00:06:21.797 "mutual_chap": false, 00:06:21.797 "chap_group": 0, 00:06:21.797 "max_large_datain_per_connection": 64, 00:06:21.797 "max_r2t_per_connection": 4, 00:06:21.797 "pdu_pool_size": 36864, 00:06:21.797 "immediate_data_pool_size": 16384, 00:06:21.797 "data_out_pool_size": 2048 00:06:21.797 } 00:06:21.797 } 00:06:21.797 ] 00:06:21.797 } 00:06:21.797 ] 00:06:21.797 } 00:06:21.797 14:10:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:21.797 14:10:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57893 00:06:21.797 14:10:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 57893 ']' 00:06:21.797 14:10:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 57893 00:06:21.797 14:10:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:06:21.797 14:10:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:21.797 14:10:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57893 00:06:21.797 14:10:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:21.797 14:10:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:21.798 14:10:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57893' 00:06:21.798 killing process with pid 57893 00:06:21.798 14:10:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 57893 00:06:21.798 14:10:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 57893 00:06:24.328 14:10:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57938 00:06:24.328 14:10:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:24.328 14:10:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:29.599 14:10:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57938 00:06:29.599 14:10:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 57938 ']' 00:06:29.599 14:10:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 57938 00:06:29.599 
14:10:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:06:29.599 14:10:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:29.599 14:10:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57938 00:06:29.599 killing process with pid 57938 00:06:29.599 14:10:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:29.599 14:10:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:29.599 14:10:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57938' 00:06:29.599 14:10:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 57938 00:06:29.599 14:10:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 57938 00:06:32.165 14:10:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:32.165 14:10:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:32.165 ************************************ 00:06:32.165 END TEST skip_rpc_with_json 00:06:32.165 ************************************ 00:06:32.165 00:06:32.165 real 0m11.503s 00:06:32.165 user 0m10.867s 00:06:32.165 sys 0m0.997s 00:06:32.165 14:10:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:32.165 14:10:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:32.165 14:10:59 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:32.165 14:10:59 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:32.165 14:10:59 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:32.165 14:10:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.165 ************************************ 00:06:32.165 START TEST skip_rpc_with_delay 00:06:32.165 ************************************ 00:06:32.165 14:10:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:06:32.165 14:10:59 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:32.165 14:10:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:32.165 14:10:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:32.165 14:10:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:32.165 14:10:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.165 14:10:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:32.165 14:10:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.165 14:10:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:32.165 14:10:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.165 14:10:59 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:32.165 14:10:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:32.165 14:10:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:32.165 [2024-11-06 14:10:59.450498] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:06:32.165 14:10:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:32.165 14:10:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:32.165 14:10:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:32.165 14:10:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:32.165 00:06:32.165 real 0m0.197s 00:06:32.165 user 0m0.093s 00:06:32.165 sys 0m0.104s 00:06:32.165 14:10:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:32.165 ************************************ 00:06:32.165 END TEST skip_rpc_with_delay 00:06:32.165 ************************************ 00:06:32.165 14:10:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:32.165 14:10:59 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:32.165 14:10:59 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:32.165 14:10:59 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:32.165 14:10:59 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:32.165 14:10:59 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:32.165 14:10:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.165 ************************************ 00:06:32.165 START TEST exit_on_failed_rpc_init 00:06:32.165 ************************************ 00:06:32.165 14:10:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:06:32.165 14:10:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58077 00:06:32.165 14:10:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58077 00:06:32.165 14:10:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:32.165 14:10:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 58077 ']' 00:06:32.165 14:10:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.165 14:10:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:32.165 14:10:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.165 14:10:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:32.165 14:10:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:32.165 [2024-11-06 14:10:59.722067] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:06:32.165 [2024-11-06 14:10:59.722415] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58077 ] 00:06:32.423 [2024-11-06 14:10:59.909223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.423 [2024-11-06 14:11:00.029247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.681 [2024-11-06 14:11:00.292800] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:33.617 14:11:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:33.617 14:11:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:06:33.617 14:11:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:33.617 14:11:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:33.617 14:11:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:33.618 14:11:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:33.618 14:11:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:33.618 14:11:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:33.618 14:11:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:33.618 14:11:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:33.618 14:11:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:33.618 14:11:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:33.618 14:11:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:33.618 14:11:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:33.618 14:11:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:33.618 [2024-11-06 14:11:01.045489] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:06:33.618 [2024-11-06 14:11:01.045636] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58095 ] 00:06:33.618 [2024-11-06 14:11:01.230803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.876 [2024-11-06 14:11:01.348599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.876 [2024-11-06 14:11:01.348884] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:33.876 [2024-11-06 14:11:01.348912] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:33.876 [2024-11-06 14:11:01.348933] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:34.135 14:11:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:34.135 14:11:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:34.135 14:11:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:34.135 14:11:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:34.135 14:11:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:34.135 14:11:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:34.135 14:11:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:34.135 14:11:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58077 00:06:34.135 14:11:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 58077 ']' 00:06:34.135 14:11:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 58077 00:06:34.135 14:11:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:06:34.135 14:11:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:34.135 14:11:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58077 00:06:34.135 14:11:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:34.135 14:11:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:34.135 14:11:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58077' 00:06:34.135 killing process with pid 58077 00:06:34.135 14:11:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 58077 00:06:34.135 14:11:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 58077 00:06:36.685 00:06:36.685 real 0m4.511s 00:06:36.685 user 0m4.795s 00:06:36.685 sys 0m0.686s 00:06:36.685 14:11:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:36.685 ************************************ 00:06:36.685 END TEST exit_on_failed_rpc_init 00:06:36.685 ************************************ 00:06:36.685 14:11:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:36.685 14:11:04 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:36.685 00:06:36.685 real 0m24.222s 00:06:36.685 user 0m22.910s 00:06:36.685 sys 0m2.561s 00:06:36.685 ************************************ 00:06:36.685 END TEST skip_rpc 00:06:36.685 ************************************ 00:06:36.685 14:11:04 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:36.685 14:11:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.685 14:11:04 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:36.685 14:11:04 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:36.685 14:11:04 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:36.685 14:11:04 -- common/autotest_common.sh@10 -- # set +x 00:06:36.685 
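The skip_rpc suite traced above starts spdk_tgt both with and without its RPC server and checks that rpc_cmd succeeds or fails accordingly. A minimal hand-run sketch of that same round-trip, assuming the SPDK checkout and built spdk_tgt at the paths used throughout this log:

    SPDK=/home/vagrant/spdk_repo/spdk
    $SPDK/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk.sock &      # start the target WITH an RPC server
    TGT_PID=$!
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done          # crude stand-in for waitforlisten
    $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version  # works here; fails under --no-rpc-server
    kill "$TGT_PID"; wait "$TGT_PID"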
************************************ 00:06:36.685 START TEST rpc_client 00:06:36.685 ************************************ 00:06:36.685 14:11:04 rpc_client -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:36.944 * Looking for test storage... 00:06:36.944 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:36.944 14:11:04 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:36.944 14:11:04 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:06:36.944 14:11:04 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:36.944 14:11:04 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:36.944 14:11:04 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:36.944 14:11:04 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:36.944 14:11:04 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:36.944 14:11:04 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:36.944 14:11:04 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:36.944 14:11:04 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:36.944 14:11:04 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:36.944 14:11:04 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:36.944 14:11:04 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:36.944 14:11:04 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:36.944 14:11:04 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:36.944 14:11:04 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:36.944 14:11:04 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:36.944 14:11:04 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:36.944 14:11:04 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:36.944 14:11:04 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:36.944 14:11:04 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:36.944 14:11:04 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:36.944 14:11:04 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:36.944 14:11:04 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:36.944 14:11:04 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:36.944 14:11:04 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:36.944 14:11:04 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:36.944 14:11:04 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:36.944 14:11:04 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:36.944 14:11:04 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:36.944 14:11:04 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:36.944 14:11:04 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:36.944 14:11:04 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:36.944 14:11:04 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:36.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.944 --rc genhtml_branch_coverage=1 00:06:36.944 --rc genhtml_function_coverage=1 00:06:36.944 --rc genhtml_legend=1 00:06:36.944 --rc geninfo_all_blocks=1 00:06:36.944 --rc geninfo_unexecuted_blocks=1 00:06:36.944 00:06:36.944 ' 00:06:36.944 14:11:04 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:36.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.944 --rc genhtml_branch_coverage=1 00:06:36.944 --rc genhtml_function_coverage=1 00:06:36.944 --rc genhtml_legend=1 00:06:36.944 --rc geninfo_all_blocks=1 00:06:36.944 --rc geninfo_unexecuted_blocks=1 00:06:36.944 00:06:36.944 ' 00:06:36.944 14:11:04 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:36.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.944 --rc genhtml_branch_coverage=1 00:06:36.944 --rc genhtml_function_coverage=1 00:06:36.944 --rc genhtml_legend=1 00:06:36.944 --rc geninfo_all_blocks=1 00:06:36.944 --rc geninfo_unexecuted_blocks=1 00:06:36.944 00:06:36.944 ' 00:06:36.944 14:11:04 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:36.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.944 --rc genhtml_branch_coverage=1 00:06:36.944 --rc genhtml_function_coverage=1 00:06:36.944 --rc genhtml_legend=1 00:06:36.944 --rc geninfo_all_blocks=1 00:06:36.944 --rc geninfo_unexecuted_blocks=1 00:06:36.944 00:06:36.944 ' 00:06:36.944 14:11:04 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:36.944 OK 00:06:36.944 14:11:04 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:36.944 00:06:36.944 real 0m0.319s 00:06:36.944 user 0m0.167s 00:06:36.944 sys 0m0.165s 00:06:36.944 14:11:04 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:36.945 14:11:04 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:36.945 ************************************ 00:06:36.945 END TEST rpc_client 00:06:36.945 ************************************ 00:06:37.204 14:11:04 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:37.204 14:11:04 -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:37.204 14:11:04 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:37.204 14:11:04 -- common/autotest_common.sh@10 -- # set +x 00:06:37.204 ************************************ 00:06:37.204 START TEST json_config 00:06:37.204 ************************************ 00:06:37.204 14:11:04 json_config -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:37.204 14:11:04 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:37.204 14:11:04 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:06:37.204 14:11:04 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:37.204 14:11:04 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:37.204 14:11:04 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:37.204 14:11:04 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:37.204 14:11:04 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:37.204 14:11:04 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:37.204 14:11:04 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:37.204 14:11:04 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:37.204 14:11:04 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:37.204 14:11:04 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:37.204 14:11:04 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:37.204 14:11:04 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:37.204 14:11:04 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:37.204 14:11:04 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:37.204 14:11:04 json_config -- scripts/common.sh@345 -- # : 1 00:06:37.204 14:11:04 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:37.204 14:11:04 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:37.204 14:11:04 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:37.204 14:11:04 json_config -- scripts/common.sh@353 -- # local d=1 00:06:37.204 14:11:04 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:37.204 14:11:04 json_config -- scripts/common.sh@355 -- # echo 1 00:06:37.204 14:11:04 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:37.204 14:11:04 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:37.204 14:11:04 json_config -- scripts/common.sh@353 -- # local d=2 00:06:37.204 14:11:04 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:37.204 14:11:04 json_config -- scripts/common.sh@355 -- # echo 2 00:06:37.204 14:11:04 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:37.204 14:11:04 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:37.204 14:11:04 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:37.204 14:11:04 json_config -- scripts/common.sh@368 -- # return 0 00:06:37.204 14:11:04 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:37.204 14:11:04 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:37.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.204 --rc genhtml_branch_coverage=1 00:06:37.204 --rc genhtml_function_coverage=1 00:06:37.204 --rc genhtml_legend=1 00:06:37.204 --rc geninfo_all_blocks=1 00:06:37.204 --rc geninfo_unexecuted_blocks=1 00:06:37.204 00:06:37.204 ' 00:06:37.204 14:11:04 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:37.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.204 --rc genhtml_branch_coverage=1 00:06:37.204 --rc genhtml_function_coverage=1 00:06:37.204 --rc genhtml_legend=1 00:06:37.204 --rc geninfo_all_blocks=1 00:06:37.204 --rc geninfo_unexecuted_blocks=1 00:06:37.204 00:06:37.204 ' 00:06:37.204 14:11:04 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:37.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.204 --rc genhtml_branch_coverage=1 00:06:37.204 --rc genhtml_function_coverage=1 00:06:37.204 --rc genhtml_legend=1 00:06:37.204 --rc geninfo_all_blocks=1 00:06:37.204 --rc geninfo_unexecuted_blocks=1 00:06:37.204 00:06:37.204 ' 00:06:37.204 14:11:04 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:37.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.204 --rc genhtml_branch_coverage=1 00:06:37.204 --rc genhtml_function_coverage=1 00:06:37.204 --rc genhtml_legend=1 00:06:37.204 --rc geninfo_all_blocks=1 00:06:37.204 --rc geninfo_unexecuted_blocks=1 00:06:37.204 00:06:37.204 ' 00:06:37.204 14:11:04 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:37.204 14:11:04 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:37.204 14:11:04 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:37.204 14:11:04 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:37.204 14:11:04 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:37.204 14:11:04 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:37.204 14:11:04 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:37.204 14:11:04 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:37.204 14:11:04 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:37.204 14:11:04 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:37.204 14:11:04 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:37.204 14:11:04 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:37.463 14:11:04 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:06:37.463 14:11:04 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:06:37.463 14:11:04 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:37.463 14:11:04 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:37.463 14:11:04 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:37.463 14:11:04 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:37.463 14:11:04 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:37.463 14:11:04 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:37.463 14:11:04 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:37.463 14:11:04 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:37.463 14:11:04 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:37.463 14:11:04 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.463 14:11:04 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.463 14:11:04 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.463 14:11:04 json_config -- paths/export.sh@5 -- # export PATH 00:06:37.463 14:11:04 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.463 14:11:04 json_config -- nvmf/common.sh@51 -- # : 0 00:06:37.463 14:11:04 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:37.463 14:11:04 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:37.463 14:11:04 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:37.463 14:11:04 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:37.463 14:11:04 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:37.463 14:11:04 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:37.463 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:37.463 14:11:04 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:37.463 14:11:04 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:37.463 14:11:04 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:37.463 14:11:04 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:37.463 14:11:04 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:37.463 14:11:04 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:37.463 14:11:04 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:37.463 14:11:04 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:37.463 14:11:04 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:37.463 14:11:04 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:37.463 14:11:04 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:37.463 14:11:04 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:37.463 14:11:04 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:37.463 14:11:04 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:37.463 14:11:04 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:06:37.463 14:11:04 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:37.463 14:11:04 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:37.463 14:11:04 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:37.463 14:11:04 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:06:37.463 INFO: JSON configuration test init 00:06:37.463 14:11:04 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:06:37.463 14:11:04 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:06:37.463 14:11:04 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:37.464 14:11:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:37.464 14:11:04 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:06:37.464 14:11:04 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:37.464 14:11:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:37.464 Waiting for target to run... 00:06:37.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
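The declarations traced above (app_pid, app_socket, app_params, configs_path) are the per-app bookkeeping that json_config/common.sh keeps before launching the target. A reduced, standalone sketch of that pattern, using only values visible in this log; the helper name start_target is illustrative, not the script's own:

    declare -A app_socket=([target]=/var/tmp/spdk_tgt.sock [initiator]=/var/tmp/spdk_initiator.sock)
    declare -A app_params=([target]='-m 0x1 -s 1024' [initiator]='-m 0x2 -g -u -s 1024')
    declare -A app_pid

    start_target() {   # illustrative helper: launch the "target" app with its stock params
      /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ${app_params[target]} \
          -r "${app_socket[target]}" "$@" &   # "$@" carries e.g. --wait-for-rpc or --json <file>
      app_pid[target]=$!
    }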
00:06:37.464 14:11:04 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:06:37.464 14:11:04 json_config -- json_config/common.sh@9 -- # local app=target 00:06:37.464 14:11:04 json_config -- json_config/common.sh@10 -- # shift 00:06:37.464 14:11:04 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:37.464 14:11:04 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:37.464 14:11:04 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:37.464 14:11:04 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:37.464 14:11:04 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:37.464 14:11:04 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=58265 00:06:37.464 14:11:04 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:37.464 14:11:04 json_config -- json_config/common.sh@25 -- # waitforlisten 58265 /var/tmp/spdk_tgt.sock 00:06:37.464 14:11:04 json_config -- common/autotest_common.sh@833 -- # '[' -z 58265 ']' 00:06:37.464 14:11:04 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:37.464 14:11:04 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:37.464 14:11:04 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:37.464 14:11:04 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:37.464 14:11:04 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:37.464 14:11:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:37.464 [2024-11-06 14:11:05.011637] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:06:37.464 [2024-11-06 14:11:05.011999] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58265 ] 00:06:38.031 [2024-11-06 14:11:05.416867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.031 [2024-11-06 14:11:05.528443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.290 14:11:05 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:38.291 14:11:05 json_config -- common/autotest_common.sh@866 -- # return 0 00:06:38.291 14:11:05 json_config -- json_config/common.sh@26 -- # echo '' 00:06:38.291 00:06:38.291 14:11:05 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:06:38.291 14:11:05 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:06:38.291 14:11:05 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:38.291 14:11:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:38.291 14:11:05 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:06:38.291 14:11:05 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:06:38.291 14:11:05 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:38.291 14:11:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:38.291 14:11:05 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:38.291 14:11:05 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:06:38.291 14:11:05 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:38.859 [2024-11-06 14:11:06.348718] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:39.427 14:11:07 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:06:39.427 14:11:07 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:39.427 14:11:07 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:39.427 14:11:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:39.427 14:11:07 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:39.427 14:11:07 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:39.427 14:11:07 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:39.427 14:11:07 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:06:39.427 14:11:07 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:06:39.427 14:11:07 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:39.427 14:11:07 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:39.427 14:11:07 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:39.685 14:11:07 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:06:39.685 14:11:07 json_config -- json_config/json_config.sh@51 -- # local get_types 00:06:39.685 14:11:07 json_config -- json_config/json_config.sh@53 
-- # local type_diff 00:06:39.685 14:11:07 json_config -- json_config/json_config.sh@54 -- # sort 00:06:39.685 14:11:07 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:06:39.685 14:11:07 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:06:39.685 14:11:07 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:06:39.685 14:11:07 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:06:39.685 14:11:07 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:06:39.685 14:11:07 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:06:39.685 14:11:07 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:39.685 14:11:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:39.944 14:11:07 json_config -- json_config/json_config.sh@62 -- # return 0 00:06:39.944 14:11:07 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:06:39.944 14:11:07 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:06:39.944 14:11:07 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:06:39.944 14:11:07 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:06:39.944 14:11:07 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:06:39.944 14:11:07 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:06:39.944 14:11:07 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:39.944 14:11:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:39.944 14:11:07 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:39.944 14:11:07 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:06:39.944 14:11:07 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:06:39.944 14:11:07 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:39.944 14:11:07 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:39.944 MallocForNvmf0 00:06:39.944 14:11:07 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:39.944 14:11:07 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:40.210 MallocForNvmf1 00:06:40.210 14:11:07 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:40.210 14:11:07 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:40.498 [2024-11-06 14:11:08.022132] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:40.498 14:11:08 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:40.498 14:11:08 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:40.757 14:11:08 json_config -- json_config/json_config.sh@254 -- # tgt_rpc 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:40.757 14:11:08 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:41.016 14:11:08 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:41.016 14:11:08 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:41.274 14:11:08 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:41.274 14:11:08 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:41.274 [2024-11-06 14:11:08.890534] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:41.533 14:11:08 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:41.533 14:11:08 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:41.533 14:11:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:41.533 14:11:08 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:41.533 14:11:08 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:41.533 14:11:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:41.533 14:11:09 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:06:41.533 14:11:09 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:41.533 14:11:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:41.792 MallocBdevForConfigChangeCheck 00:06:41.792 14:11:09 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:41.792 14:11:09 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:41.792 14:11:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:41.792 14:11:09 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:41.792 14:11:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:42.050 INFO: shutting down applications... 00:06:42.050 14:11:09 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 
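The tgt_rpc calls traced above all funnel through rpc.py against /var/tmp/spdk_tgt.sock. Issued by hand, the same nvmf setup and config snapshot would look roughly like this; the command arguments are copied from the trace, and the redirect target mirrors configs_path[target], so treat it as a sketch rather than the test's exact mechanics:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
    $RPC save_config > /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json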
00:06:42.050 14:11:09 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:42.050 14:11:09 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:42.050 14:11:09 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:42.050 14:11:09 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:42.617 Calling clear_iscsi_subsystem 00:06:42.617 Calling clear_nvmf_subsystem 00:06:42.617 Calling clear_nbd_subsystem 00:06:42.617 Calling clear_ublk_subsystem 00:06:42.617 Calling clear_vhost_blk_subsystem 00:06:42.617 Calling clear_vhost_scsi_subsystem 00:06:42.617 Calling clear_bdev_subsystem 00:06:42.617 14:11:09 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:06:42.617 14:11:09 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:42.617 14:11:09 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:42.617 14:11:10 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:42.617 14:11:10 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:42.617 14:11:10 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:06:42.876 14:11:10 json_config -- json_config/json_config.sh@352 -- # break 00:06:42.876 14:11:10 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:42.876 14:11:10 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:06:42.876 14:11:10 json_config -- json_config/common.sh@31 -- # local app=target 00:06:42.876 14:11:10 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:42.876 14:11:10 json_config -- json_config/common.sh@35 -- # [[ -n 58265 ]] 00:06:42.876 14:11:10 json_config -- json_config/common.sh@38 -- # kill -SIGINT 58265 00:06:42.876 14:11:10 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:42.876 14:11:10 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:42.876 14:11:10 json_config -- json_config/common.sh@41 -- # kill -0 58265 00:06:42.876 14:11:10 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:43.454 14:11:10 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:43.454 14:11:10 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:43.454 14:11:10 json_config -- json_config/common.sh@41 -- # kill -0 58265 00:06:43.454 14:11:10 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:44.023 14:11:11 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:44.023 14:11:11 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:44.023 14:11:11 json_config -- json_config/common.sh@41 -- # kill -0 58265 00:06:44.023 14:11:11 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:44.023 14:11:11 json_config -- json_config/common.sh@43 -- # break 00:06:44.023 14:11:11 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:44.023 14:11:11 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:44.023 SPDK target shutdown done 00:06:44.023 14:11:11 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 
00:06:44.023 INFO: relaunching applications... 00:06:44.023 14:11:11 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:44.023 14:11:11 json_config -- json_config/common.sh@9 -- # local app=target 00:06:44.023 14:11:11 json_config -- json_config/common.sh@10 -- # shift 00:06:44.023 14:11:11 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:44.023 14:11:11 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:44.023 14:11:11 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:44.023 14:11:11 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:44.023 14:11:11 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:44.023 14:11:11 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=58468 00:06:44.023 14:11:11 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:44.023 14:11:11 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:44.023 Waiting for target to run... 00:06:44.023 14:11:11 json_config -- json_config/common.sh@25 -- # waitforlisten 58468 /var/tmp/spdk_tgt.sock 00:06:44.023 14:11:11 json_config -- common/autotest_common.sh@833 -- # '[' -z 58468 ']' 00:06:44.024 14:11:11 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:44.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:44.024 14:11:11 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:44.024 14:11:11 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:44.024 14:11:11 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:44.024 14:11:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:44.024 [2024-11-06 14:11:11.570809] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:06:44.024 [2024-11-06 14:11:11.571283] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58468 ] 00:06:44.590 [2024-11-06 14:11:12.002796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.590 [2024-11-06 14:11:12.115219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.850 [2024-11-06 14:11:12.465775] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:45.786 [2024-11-06 14:11:13.120807] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:45.786 [2024-11-06 14:11:13.152974] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:45.786 00:06:45.786 INFO: Checking if target configuration is the same... 
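The comparison that follows is driven by test/json_config/json_diff.sh: it dumps the running configuration over RPC, normalizes both JSON documents with config_filter.py -method sort, and diffs the results. A condensed sketch of the same check; the temp-file names and redirections here only approximate what json_diff.sh does internally with mktemp:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
  $rpc -s /var/tmp/spdk_tgt.sock save_config | $filter -method sort > /tmp/live.sorted.json
  $filter -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/saved.sorted.json
  diff -u /tmp/live.sorted.json /tmp/saved.sorted.json   # exit 0: configs match, non-zero: change detected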
00:06:45.786 14:11:13 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:45.786 14:11:13 json_config -- common/autotest_common.sh@866 -- # return 0 00:06:45.786 14:11:13 json_config -- json_config/common.sh@26 -- # echo '' 00:06:45.786 14:11:13 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:06:45.786 14:11:13 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:45.787 14:11:13 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:45.787 14:11:13 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:06:45.787 14:11:13 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:45.787 + '[' 2 -ne 2 ']' 00:06:45.787 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:45.787 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:06:45.787 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:45.787 +++ basename /dev/fd/62 00:06:45.787 ++ mktemp /tmp/62.XXX 00:06:45.787 + tmp_file_1=/tmp/62.e4c 00:06:45.787 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:45.787 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:45.787 + tmp_file_2=/tmp/spdk_tgt_config.json.KEi 00:06:45.787 + ret=0 00:06:45.787 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:46.045 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:46.046 + diff -u /tmp/62.e4c /tmp/spdk_tgt_config.json.KEi 00:06:46.046 INFO: JSON config files are the same 00:06:46.046 + echo 'INFO: JSON config files are the same' 00:06:46.046 + rm /tmp/62.e4c /tmp/spdk_tgt_config.json.KEi 00:06:46.046 + exit 0 00:06:46.046 INFO: changing configuration and checking if this can be detected... 00:06:46.046 14:11:13 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:06:46.046 14:11:13 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:46.046 14:11:13 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:46.046 14:11:13 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:46.305 14:11:13 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:46.305 14:11:13 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:06:46.305 14:11:13 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:46.305 + '[' 2 -ne 2 ']' 00:06:46.305 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:46.305 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:06:46.305 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:46.305 +++ basename /dev/fd/62 00:06:46.305 ++ mktemp /tmp/62.XXX 00:06:46.305 + tmp_file_1=/tmp/62.kE3 00:06:46.305 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:46.305 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:46.305 + tmp_file_2=/tmp/spdk_tgt_config.json.5aD 00:06:46.305 + ret=0 00:06:46.305 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:46.873 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:46.873 + diff -u /tmp/62.kE3 /tmp/spdk_tgt_config.json.5aD 00:06:46.873 + ret=1 00:06:46.873 + echo '=== Start of file: /tmp/62.kE3 ===' 00:06:46.873 + cat /tmp/62.kE3 00:06:46.873 + echo '=== End of file: /tmp/62.kE3 ===' 00:06:46.873 + echo '' 00:06:46.873 + echo '=== Start of file: /tmp/spdk_tgt_config.json.5aD ===' 00:06:46.873 + cat /tmp/spdk_tgt_config.json.5aD 00:06:46.873 + echo '=== End of file: /tmp/spdk_tgt_config.json.5aD ===' 00:06:46.873 + echo '' 00:06:46.873 + rm /tmp/62.kE3 /tmp/spdk_tgt_config.json.5aD 00:06:46.873 + exit 1 00:06:46.873 INFO: configuration change detected. 00:06:46.873 14:11:14 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:06:46.873 14:11:14 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:06:46.873 14:11:14 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:06:46.873 14:11:14 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:46.873 14:11:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:46.873 14:11:14 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:06:46.873 14:11:14 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:06:46.873 14:11:14 json_config -- json_config/json_config.sh@324 -- # [[ -n 58468 ]] 00:06:46.873 14:11:14 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:06:46.873 14:11:14 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:06:46.873 14:11:14 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:46.873 14:11:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:46.873 14:11:14 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:06:46.873 14:11:14 json_config -- json_config/json_config.sh@200 -- # uname -s 00:06:46.873 14:11:14 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:06:46.873 14:11:14 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:06:46.873 14:11:14 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:06:46.873 14:11:14 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:06:46.873 14:11:14 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:46.873 14:11:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:46.873 14:11:14 json_config -- json_config/json_config.sh@330 -- # killprocess 58468 00:06:46.873 14:11:14 json_config -- common/autotest_common.sh@952 -- # '[' -z 58468 ']' 00:06:46.873 14:11:14 json_config -- common/autotest_common.sh@956 -- # kill -0 58468 00:06:46.873 14:11:14 json_config -- common/autotest_common.sh@957 -- # uname 00:06:46.873 14:11:14 json_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:46.873 14:11:14 json_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58468 00:06:46.873 
killing process with pid 58468 00:06:46.873 14:11:14 json_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:46.873 14:11:14 json_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:46.873 14:11:14 json_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58468' 00:06:46.873 14:11:14 json_config -- common/autotest_common.sh@971 -- # kill 58468 00:06:46.873 14:11:14 json_config -- common/autotest_common.sh@976 -- # wait 58468 00:06:48.250 14:11:15 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:48.250 14:11:15 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:06:48.250 14:11:15 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:48.250 14:11:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:48.250 14:11:15 json_config -- json_config/json_config.sh@335 -- # return 0 00:06:48.250 INFO: Success 00:06:48.250 14:11:15 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:06:48.250 ************************************ 00:06:48.250 END TEST json_config 00:06:48.250 ************************************ 00:06:48.250 00:06:48.250 real 0m10.904s 00:06:48.250 user 0m13.748s 00:06:48.250 sys 0m2.415s 00:06:48.250 14:11:15 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:48.250 14:11:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:48.250 14:11:15 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:48.250 14:11:15 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:48.250 14:11:15 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:48.250 14:11:15 -- common/autotest_common.sh@10 -- # set +x 00:06:48.250 ************************************ 00:06:48.250 START TEST json_config_extra_key 00:06:48.250 ************************************ 00:06:48.250 14:11:15 json_config_extra_key -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:48.250 14:11:15 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:48.250 14:11:15 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:06:48.250 14:11:15 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:48.250 14:11:15 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:48.250 14:11:15 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:48.250 14:11:15 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:48.250 14:11:15 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:48.250 14:11:15 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:48.250 14:11:15 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:48.250 14:11:15 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:48.250 14:11:15 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:48.250 14:11:15 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:48.250 14:11:15 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:48.250 14:11:15 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:48.250 14:11:15 json_config_extra_key -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:48.250 14:11:15 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:48.250 14:11:15 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:48.250 14:11:15 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:48.250 14:11:15 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:48.250 14:11:15 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:48.250 14:11:15 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:48.250 14:11:15 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:48.250 14:11:15 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:48.250 14:11:15 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:48.250 14:11:15 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:48.250 14:11:15 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:48.250 14:11:15 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:48.250 14:11:15 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:48.250 14:11:15 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:48.250 14:11:15 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:48.250 14:11:15 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:48.250 14:11:15 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:48.250 14:11:15 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:48.250 14:11:15 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:48.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.250 --rc genhtml_branch_coverage=1 00:06:48.250 --rc genhtml_function_coverage=1 00:06:48.250 --rc genhtml_legend=1 00:06:48.250 --rc geninfo_all_blocks=1 00:06:48.250 --rc geninfo_unexecuted_blocks=1 00:06:48.250 00:06:48.250 ' 00:06:48.250 14:11:15 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:48.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.250 --rc genhtml_branch_coverage=1 00:06:48.250 --rc genhtml_function_coverage=1 00:06:48.250 --rc genhtml_legend=1 00:06:48.250 --rc geninfo_all_blocks=1 00:06:48.250 --rc geninfo_unexecuted_blocks=1 00:06:48.250 00:06:48.250 ' 00:06:48.250 14:11:15 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:48.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.250 --rc genhtml_branch_coverage=1 00:06:48.250 --rc genhtml_function_coverage=1 00:06:48.250 --rc genhtml_legend=1 00:06:48.250 --rc geninfo_all_blocks=1 00:06:48.250 --rc geninfo_unexecuted_blocks=1 00:06:48.250 00:06:48.250 ' 00:06:48.250 14:11:15 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:48.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.250 --rc genhtml_branch_coverage=1 00:06:48.250 --rc genhtml_function_coverage=1 00:06:48.250 --rc genhtml_legend=1 00:06:48.250 --rc geninfo_all_blocks=1 00:06:48.250 --rc geninfo_unexecuted_blocks=1 00:06:48.250 00:06:48.250 ' 00:06:48.250 14:11:15 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:48.250 14:11:15 json_config_extra_key -- nvmf/common.sh@7 -- # 
uname -s 00:06:48.250 14:11:15 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:48.250 14:11:15 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:48.250 14:11:15 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:48.250 14:11:15 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:48.250 14:11:15 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:48.250 14:11:15 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:48.251 14:11:15 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:48.251 14:11:15 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:48.251 14:11:15 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:48.251 14:11:15 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:48.251 14:11:15 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:06:48.251 14:11:15 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:06:48.251 14:11:15 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:48.251 14:11:15 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:48.251 14:11:15 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:48.251 14:11:15 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:48.251 14:11:15 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:48.251 14:11:15 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:48.251 14:11:15 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:48.251 14:11:15 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:48.251 14:11:15 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:48.251 14:11:15 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.251 14:11:15 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.251 14:11:15 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.251 14:11:15 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:48.251 14:11:15 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.251 14:11:15 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:48.251 14:11:15 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:48.251 14:11:15 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:48.251 14:11:15 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:48.251 14:11:15 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:48.251 14:11:15 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:48.251 14:11:15 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:48.251 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:48.251 14:11:15 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:48.251 14:11:15 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:48.251 14:11:15 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:48.251 14:11:15 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:48.251 14:11:15 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:48.251 14:11:15 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:48.251 14:11:15 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:48.251 14:11:15 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:48.251 14:11:15 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:48.251 14:11:15 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:48.251 14:11:15 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:48.251 14:11:15 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:48.251 14:11:15 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:48.251 14:11:15 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:48.251 INFO: launching applications... 
00:06:48.251 14:11:15 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:48.251 14:11:15 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:48.251 14:11:15 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:48.251 14:11:15 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:48.251 14:11:15 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:48.251 14:11:15 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:48.251 14:11:15 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:48.251 14:11:15 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:48.251 14:11:15 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58640 00:06:48.251 14:11:15 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:48.251 Waiting for target to run... 00:06:48.251 14:11:15 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58640 /var/tmp/spdk_tgt.sock 00:06:48.251 14:11:15 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:48.251 14:11:15 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 58640 ']' 00:06:48.251 14:11:15 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:48.251 14:11:15 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:48.251 14:11:15 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:48.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:48.251 14:11:15 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:48.251 14:11:15 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:48.510 [2024-11-06 14:11:15.966631] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:06:48.511 [2024-11-06 14:11:15.966785] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58640 ] 00:06:48.768 [2024-11-06 14:11:16.372969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.065 [2024-11-06 14:11:16.486324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.335 [2024-11-06 14:11:16.709851] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:49.902 00:06:49.902 INFO: shutting down applications... 00:06:49.902 14:11:17 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:49.902 14:11:17 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:06:49.902 14:11:17 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:49.902 14:11:17 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
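The shutdown that follows uses the same helper as the json_config test above (json_config/common.sh): send SIGINT to the target, then poll until the PID disappears. Condensed from the loop visible in the trace; the stderr redirection is added here for readability and is not in the original script.

  kill -SIGINT "$pid"
  for (( i = 0; i < 30; i++ )); do
      kill -0 "$pid" 2>/dev/null || break   # kill -0 only checks that the process still exists
      sleep 0.5
  done
  echo 'SPDK target shutdown done'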
00:06:49.902 14:11:17 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:49.902 14:11:17 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:49.902 14:11:17 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:49.902 14:11:17 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58640 ]] 00:06:49.902 14:11:17 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58640 00:06:49.902 14:11:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:49.902 14:11:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:49.902 14:11:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58640 00:06:49.902 14:11:17 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:50.162 14:11:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:50.162 14:11:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:50.162 14:11:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58640 00:06:50.162 14:11:17 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:50.729 14:11:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:50.729 14:11:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:50.729 14:11:18 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58640 00:06:50.729 14:11:18 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:51.296 14:11:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:51.296 14:11:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:51.296 14:11:18 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58640 00:06:51.296 14:11:18 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:51.896 14:11:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:51.896 14:11:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:51.896 14:11:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58640 00:06:51.896 14:11:19 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:52.492 14:11:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:52.492 14:11:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:52.492 14:11:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58640 00:06:52.492 14:11:19 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:52.751 14:11:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:52.751 14:11:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:52.751 14:11:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58640 00:06:52.751 14:11:20 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:52.751 14:11:20 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:52.751 14:11:20 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:52.751 SPDK target shutdown done 00:06:52.751 14:11:20 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:52.751 Success 00:06:52.751 14:11:20 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:52.751 ************************************ 00:06:52.751 END TEST json_config_extra_key 00:06:52.751 
************************************ 00:06:52.751 00:06:52.751 real 0m4.725s 00:06:52.751 user 0m4.190s 00:06:52.751 sys 0m0.676s 00:06:52.751 14:11:20 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:52.751 14:11:20 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:53.010 14:11:20 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:53.010 14:11:20 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:53.010 14:11:20 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:53.010 14:11:20 -- common/autotest_common.sh@10 -- # set +x 00:06:53.010 ************************************ 00:06:53.010 START TEST alias_rpc 00:06:53.010 ************************************ 00:06:53.010 14:11:20 alias_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:53.010 * Looking for test storage... 00:06:53.010 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:53.010 14:11:20 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:53.010 14:11:20 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:53.010 14:11:20 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:06:53.010 14:11:20 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:53.010 14:11:20 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:53.010 14:11:20 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:53.010 14:11:20 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:53.010 14:11:20 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:53.010 14:11:20 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:53.010 14:11:20 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:53.010 14:11:20 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:53.010 14:11:20 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:53.011 14:11:20 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:53.011 14:11:20 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:53.011 14:11:20 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:53.011 14:11:20 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:53.011 14:11:20 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:53.011 14:11:20 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:53.011 14:11:20 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:53.011 14:11:20 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:53.011 14:11:20 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:53.011 14:11:20 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:53.011 14:11:20 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:53.011 14:11:20 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:53.011 14:11:20 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:53.011 14:11:20 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:53.011 14:11:20 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:53.011 14:11:20 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:53.011 14:11:20 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:53.011 14:11:20 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:53.011 14:11:20 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:53.011 14:11:20 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:53.011 14:11:20 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:53.011 14:11:20 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:53.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.011 --rc genhtml_branch_coverage=1 00:06:53.011 --rc genhtml_function_coverage=1 00:06:53.011 --rc genhtml_legend=1 00:06:53.011 --rc geninfo_all_blocks=1 00:06:53.011 --rc geninfo_unexecuted_blocks=1 00:06:53.011 00:06:53.011 ' 00:06:53.011 14:11:20 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:53.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.011 --rc genhtml_branch_coverage=1 00:06:53.011 --rc genhtml_function_coverage=1 00:06:53.011 --rc genhtml_legend=1 00:06:53.011 --rc geninfo_all_blocks=1 00:06:53.011 --rc geninfo_unexecuted_blocks=1 00:06:53.011 00:06:53.011 ' 00:06:53.011 14:11:20 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:53.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.011 --rc genhtml_branch_coverage=1 00:06:53.011 --rc genhtml_function_coverage=1 00:06:53.011 --rc genhtml_legend=1 00:06:53.011 --rc geninfo_all_blocks=1 00:06:53.011 --rc geninfo_unexecuted_blocks=1 00:06:53.011 00:06:53.011 ' 00:06:53.011 14:11:20 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:53.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.011 --rc genhtml_branch_coverage=1 00:06:53.011 --rc genhtml_function_coverage=1 00:06:53.011 --rc genhtml_legend=1 00:06:53.011 --rc geninfo_all_blocks=1 00:06:53.011 --rc geninfo_unexecuted_blocks=1 00:06:53.011 00:06:53.011 ' 00:06:53.011 14:11:20 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:53.011 14:11:20 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:53.011 14:11:20 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58751 00:06:53.011 14:11:20 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58751 00:06:53.011 14:11:20 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 58751 ']' 00:06:53.011 14:11:20 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.011 14:11:20 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:53.011 14:11:20 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:06:53.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.011 14:11:20 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:53.011 14:11:20 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.270 [2024-11-06 14:11:20.751018] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:06:53.270 [2024-11-06 14:11:20.751367] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58751 ] 00:06:53.529 [2024-11-06 14:11:20.934982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.529 [2024-11-06 14:11:21.059635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.788 [2024-11-06 14:11:21.342819] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:54.724 14:11:21 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:54.724 14:11:21 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:54.724 14:11:21 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:54.724 14:11:22 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58751 00:06:54.724 14:11:22 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 58751 ']' 00:06:54.724 14:11:22 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 58751 00:06:54.724 14:11:22 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:06:54.724 14:11:22 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:54.724 14:11:22 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58751 00:06:54.724 killing process with pid 58751 00:06:54.724 14:11:22 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:54.724 14:11:22 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:54.724 14:11:22 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58751' 00:06:54.724 14:11:22 alias_rpc -- common/autotest_common.sh@971 -- # kill 58751 00:06:54.724 14:11:22 alias_rpc -- common/autotest_common.sh@976 -- # wait 58751 00:06:57.272 ************************************ 00:06:57.272 END TEST alias_rpc 00:06:57.272 ************************************ 00:06:57.272 00:06:57.272 real 0m4.356s 00:06:57.272 user 0m4.294s 00:06:57.272 sys 0m0.665s 00:06:57.272 14:11:24 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:57.272 14:11:24 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.272 14:11:24 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:57.272 14:11:24 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:57.272 14:11:24 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:57.272 14:11:24 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:57.272 14:11:24 -- common/autotest_common.sh@10 -- # set +x 00:06:57.272 ************************************ 00:06:57.272 START TEST spdkcli_tcp 00:06:57.272 ************************************ 00:06:57.272 14:11:24 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:57.551 * Looking for test storage... 
00:06:57.551 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:57.551 14:11:24 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:57.551 14:11:24 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:06:57.551 14:11:24 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:57.551 14:11:25 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:57.551 14:11:25 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:57.551 14:11:25 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:57.551 14:11:25 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:57.551 14:11:25 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:57.551 14:11:25 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:57.551 14:11:25 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:57.551 14:11:25 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:57.551 14:11:25 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:57.551 14:11:25 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:57.551 14:11:25 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:57.551 14:11:25 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:57.551 14:11:25 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:57.551 14:11:25 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:57.551 14:11:25 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:57.551 14:11:25 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:57.551 14:11:25 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:57.551 14:11:25 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:57.551 14:11:25 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:57.551 14:11:25 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:57.551 14:11:25 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:57.551 14:11:25 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:57.551 14:11:25 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:57.551 14:11:25 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:57.551 14:11:25 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:57.551 14:11:25 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:57.551 14:11:25 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:57.551 14:11:25 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:57.551 14:11:25 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:57.551 14:11:25 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:57.551 14:11:25 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:57.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.551 --rc genhtml_branch_coverage=1 00:06:57.551 --rc genhtml_function_coverage=1 00:06:57.551 --rc genhtml_legend=1 00:06:57.551 --rc geninfo_all_blocks=1 00:06:57.551 --rc geninfo_unexecuted_blocks=1 00:06:57.551 00:06:57.551 ' 00:06:57.551 14:11:25 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:57.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.551 --rc genhtml_branch_coverage=1 00:06:57.551 --rc genhtml_function_coverage=1 00:06:57.551 --rc genhtml_legend=1 00:06:57.551 --rc geninfo_all_blocks=1 00:06:57.551 --rc geninfo_unexecuted_blocks=1 00:06:57.551 
00:06:57.551 ' 00:06:57.551 14:11:25 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:57.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.551 --rc genhtml_branch_coverage=1 00:06:57.551 --rc genhtml_function_coverage=1 00:06:57.551 --rc genhtml_legend=1 00:06:57.551 --rc geninfo_all_blocks=1 00:06:57.551 --rc geninfo_unexecuted_blocks=1 00:06:57.551 00:06:57.551 ' 00:06:57.551 14:11:25 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:57.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.551 --rc genhtml_branch_coverage=1 00:06:57.551 --rc genhtml_function_coverage=1 00:06:57.551 --rc genhtml_legend=1 00:06:57.551 --rc geninfo_all_blocks=1 00:06:57.551 --rc geninfo_unexecuted_blocks=1 00:06:57.551 00:06:57.551 ' 00:06:57.551 14:11:25 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:57.551 14:11:25 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:57.551 14:11:25 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:57.551 14:11:25 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:57.551 14:11:25 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:57.551 14:11:25 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:57.551 14:11:25 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:57.551 14:11:25 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:57.551 14:11:25 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:57.551 14:11:25 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58864 00:06:57.551 14:11:25 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:57.551 14:11:25 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58864 00:06:57.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.551 14:11:25 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 58864 ']' 00:06:57.551 14:11:25 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.551 14:11:25 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:57.551 14:11:25 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.551 14:11:25 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:57.551 14:11:25 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:57.810 [2024-11-06 14:11:25.212317] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:06:57.810 [2024-11-06 14:11:25.212693] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58864 ] 00:06:57.810 [2024-11-06 14:11:25.399257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:58.069 [2024-11-06 14:11:25.533453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.070 [2024-11-06 14:11:25.533489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.329 [2024-11-06 14:11:25.799556] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:58.896 14:11:26 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:58.896 14:11:26 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:06:58.896 14:11:26 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58881 00:06:58.896 14:11:26 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:58.896 14:11:26 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:59.155 [ 00:06:59.155 "bdev_malloc_delete", 00:06:59.155 "bdev_malloc_create", 00:06:59.155 "bdev_null_resize", 00:06:59.155 "bdev_null_delete", 00:06:59.155 "bdev_null_create", 00:06:59.155 "bdev_nvme_cuse_unregister", 00:06:59.155 "bdev_nvme_cuse_register", 00:06:59.155 "bdev_opal_new_user", 00:06:59.155 "bdev_opal_set_lock_state", 00:06:59.155 "bdev_opal_delete", 00:06:59.155 "bdev_opal_get_info", 00:06:59.155 "bdev_opal_create", 00:06:59.155 "bdev_nvme_opal_revert", 00:06:59.155 "bdev_nvme_opal_init", 00:06:59.155 "bdev_nvme_send_cmd", 00:06:59.155 "bdev_nvme_set_keys", 00:06:59.155 "bdev_nvme_get_path_iostat", 00:06:59.155 "bdev_nvme_get_mdns_discovery_info", 00:06:59.155 "bdev_nvme_stop_mdns_discovery", 00:06:59.155 "bdev_nvme_start_mdns_discovery", 00:06:59.155 "bdev_nvme_set_multipath_policy", 00:06:59.155 "bdev_nvme_set_preferred_path", 00:06:59.155 "bdev_nvme_get_io_paths", 00:06:59.155 "bdev_nvme_remove_error_injection", 00:06:59.155 "bdev_nvme_add_error_injection", 00:06:59.155 "bdev_nvme_get_discovery_info", 00:06:59.155 "bdev_nvme_stop_discovery", 00:06:59.155 "bdev_nvme_start_discovery", 00:06:59.155 "bdev_nvme_get_controller_health_info", 00:06:59.155 "bdev_nvme_disable_controller", 00:06:59.155 "bdev_nvme_enable_controller", 00:06:59.155 "bdev_nvme_reset_controller", 00:06:59.155 "bdev_nvme_get_transport_statistics", 00:06:59.155 "bdev_nvme_apply_firmware", 00:06:59.155 "bdev_nvme_detach_controller", 00:06:59.155 "bdev_nvme_get_controllers", 00:06:59.155 "bdev_nvme_attach_controller", 00:06:59.155 "bdev_nvme_set_hotplug", 00:06:59.155 "bdev_nvme_set_options", 00:06:59.155 "bdev_passthru_delete", 00:06:59.155 "bdev_passthru_create", 00:06:59.155 "bdev_lvol_set_parent_bdev", 00:06:59.155 "bdev_lvol_set_parent", 00:06:59.155 "bdev_lvol_check_shallow_copy", 00:06:59.155 "bdev_lvol_start_shallow_copy", 00:06:59.155 "bdev_lvol_grow_lvstore", 00:06:59.155 "bdev_lvol_get_lvols", 00:06:59.155 "bdev_lvol_get_lvstores", 00:06:59.155 "bdev_lvol_delete", 00:06:59.155 "bdev_lvol_set_read_only", 00:06:59.155 "bdev_lvol_resize", 00:06:59.155 "bdev_lvol_decouple_parent", 00:06:59.155 "bdev_lvol_inflate", 00:06:59.155 "bdev_lvol_rename", 00:06:59.155 "bdev_lvol_clone_bdev", 00:06:59.155 "bdev_lvol_clone", 00:06:59.155 "bdev_lvol_snapshot", 
00:06:59.155 "bdev_lvol_create", 00:06:59.155 "bdev_lvol_delete_lvstore", 00:06:59.155 "bdev_lvol_rename_lvstore", 00:06:59.155 "bdev_lvol_create_lvstore", 00:06:59.155 "bdev_raid_set_options", 00:06:59.155 "bdev_raid_remove_base_bdev", 00:06:59.155 "bdev_raid_add_base_bdev", 00:06:59.155 "bdev_raid_delete", 00:06:59.155 "bdev_raid_create", 00:06:59.155 "bdev_raid_get_bdevs", 00:06:59.155 "bdev_error_inject_error", 00:06:59.155 "bdev_error_delete", 00:06:59.155 "bdev_error_create", 00:06:59.155 "bdev_split_delete", 00:06:59.155 "bdev_split_create", 00:06:59.155 "bdev_delay_delete", 00:06:59.155 "bdev_delay_create", 00:06:59.155 "bdev_delay_update_latency", 00:06:59.155 "bdev_zone_block_delete", 00:06:59.155 "bdev_zone_block_create", 00:06:59.155 "blobfs_create", 00:06:59.156 "blobfs_detect", 00:06:59.156 "blobfs_set_cache_size", 00:06:59.156 "bdev_aio_delete", 00:06:59.156 "bdev_aio_rescan", 00:06:59.156 "bdev_aio_create", 00:06:59.156 "bdev_ftl_set_property", 00:06:59.156 "bdev_ftl_get_properties", 00:06:59.156 "bdev_ftl_get_stats", 00:06:59.156 "bdev_ftl_unmap", 00:06:59.156 "bdev_ftl_unload", 00:06:59.156 "bdev_ftl_delete", 00:06:59.156 "bdev_ftl_load", 00:06:59.156 "bdev_ftl_create", 00:06:59.156 "bdev_virtio_attach_controller", 00:06:59.156 "bdev_virtio_scsi_get_devices", 00:06:59.156 "bdev_virtio_detach_controller", 00:06:59.156 "bdev_virtio_blk_set_hotplug", 00:06:59.156 "bdev_iscsi_delete", 00:06:59.156 "bdev_iscsi_create", 00:06:59.156 "bdev_iscsi_set_options", 00:06:59.156 "bdev_uring_delete", 00:06:59.156 "bdev_uring_rescan", 00:06:59.156 "bdev_uring_create", 00:06:59.156 "accel_error_inject_error", 00:06:59.156 "ioat_scan_accel_module", 00:06:59.156 "dsa_scan_accel_module", 00:06:59.156 "iaa_scan_accel_module", 00:06:59.156 "vfu_virtio_create_fs_endpoint", 00:06:59.156 "vfu_virtio_create_scsi_endpoint", 00:06:59.156 "vfu_virtio_scsi_remove_target", 00:06:59.156 "vfu_virtio_scsi_add_target", 00:06:59.156 "vfu_virtio_create_blk_endpoint", 00:06:59.156 "vfu_virtio_delete_endpoint", 00:06:59.156 "keyring_file_remove_key", 00:06:59.156 "keyring_file_add_key", 00:06:59.156 "keyring_linux_set_options", 00:06:59.156 "fsdev_aio_delete", 00:06:59.156 "fsdev_aio_create", 00:06:59.156 "iscsi_get_histogram", 00:06:59.156 "iscsi_enable_histogram", 00:06:59.156 "iscsi_set_options", 00:06:59.156 "iscsi_get_auth_groups", 00:06:59.156 "iscsi_auth_group_remove_secret", 00:06:59.156 "iscsi_auth_group_add_secret", 00:06:59.156 "iscsi_delete_auth_group", 00:06:59.156 "iscsi_create_auth_group", 00:06:59.156 "iscsi_set_discovery_auth", 00:06:59.156 "iscsi_get_options", 00:06:59.156 "iscsi_target_node_request_logout", 00:06:59.156 "iscsi_target_node_set_redirect", 00:06:59.156 "iscsi_target_node_set_auth", 00:06:59.156 "iscsi_target_node_add_lun", 00:06:59.156 "iscsi_get_stats", 00:06:59.156 "iscsi_get_connections", 00:06:59.156 "iscsi_portal_group_set_auth", 00:06:59.156 "iscsi_start_portal_group", 00:06:59.156 "iscsi_delete_portal_group", 00:06:59.156 "iscsi_create_portal_group", 00:06:59.156 "iscsi_get_portal_groups", 00:06:59.156 "iscsi_delete_target_node", 00:06:59.156 "iscsi_target_node_remove_pg_ig_maps", 00:06:59.156 "iscsi_target_node_add_pg_ig_maps", 00:06:59.156 "iscsi_create_target_node", 00:06:59.156 "iscsi_get_target_nodes", 00:06:59.156 "iscsi_delete_initiator_group", 00:06:59.156 "iscsi_initiator_group_remove_initiators", 00:06:59.156 "iscsi_initiator_group_add_initiators", 00:06:59.156 "iscsi_create_initiator_group", 00:06:59.156 "iscsi_get_initiator_groups", 00:06:59.156 
"nvmf_set_crdt", 00:06:59.156 "nvmf_set_config", 00:06:59.156 "nvmf_set_max_subsystems", 00:06:59.156 "nvmf_stop_mdns_prr", 00:06:59.156 "nvmf_publish_mdns_prr", 00:06:59.156 "nvmf_subsystem_get_listeners", 00:06:59.156 "nvmf_subsystem_get_qpairs", 00:06:59.156 "nvmf_subsystem_get_controllers", 00:06:59.156 "nvmf_get_stats", 00:06:59.156 "nvmf_get_transports", 00:06:59.156 "nvmf_create_transport", 00:06:59.156 "nvmf_get_targets", 00:06:59.156 "nvmf_delete_target", 00:06:59.156 "nvmf_create_target", 00:06:59.156 "nvmf_subsystem_allow_any_host", 00:06:59.156 "nvmf_subsystem_set_keys", 00:06:59.156 "nvmf_subsystem_remove_host", 00:06:59.156 "nvmf_subsystem_add_host", 00:06:59.156 "nvmf_ns_remove_host", 00:06:59.156 "nvmf_ns_add_host", 00:06:59.156 "nvmf_subsystem_remove_ns", 00:06:59.156 "nvmf_subsystem_set_ns_ana_group", 00:06:59.156 "nvmf_subsystem_add_ns", 00:06:59.156 "nvmf_subsystem_listener_set_ana_state", 00:06:59.156 "nvmf_discovery_get_referrals", 00:06:59.156 "nvmf_discovery_remove_referral", 00:06:59.156 "nvmf_discovery_add_referral", 00:06:59.156 "nvmf_subsystem_remove_listener", 00:06:59.156 "nvmf_subsystem_add_listener", 00:06:59.156 "nvmf_delete_subsystem", 00:06:59.156 "nvmf_create_subsystem", 00:06:59.156 "nvmf_get_subsystems", 00:06:59.156 "env_dpdk_get_mem_stats", 00:06:59.156 "nbd_get_disks", 00:06:59.156 "nbd_stop_disk", 00:06:59.156 "nbd_start_disk", 00:06:59.156 "ublk_recover_disk", 00:06:59.156 "ublk_get_disks", 00:06:59.156 "ublk_stop_disk", 00:06:59.156 "ublk_start_disk", 00:06:59.156 "ublk_destroy_target", 00:06:59.156 "ublk_create_target", 00:06:59.156 "virtio_blk_create_transport", 00:06:59.156 "virtio_blk_get_transports", 00:06:59.156 "vhost_controller_set_coalescing", 00:06:59.156 "vhost_get_controllers", 00:06:59.156 "vhost_delete_controller", 00:06:59.156 "vhost_create_blk_controller", 00:06:59.156 "vhost_scsi_controller_remove_target", 00:06:59.156 "vhost_scsi_controller_add_target", 00:06:59.156 "vhost_start_scsi_controller", 00:06:59.156 "vhost_create_scsi_controller", 00:06:59.156 "thread_set_cpumask", 00:06:59.156 "scheduler_set_options", 00:06:59.156 "framework_get_governor", 00:06:59.156 "framework_get_scheduler", 00:06:59.156 "framework_set_scheduler", 00:06:59.156 "framework_get_reactors", 00:06:59.156 "thread_get_io_channels", 00:06:59.156 "thread_get_pollers", 00:06:59.156 "thread_get_stats", 00:06:59.156 "framework_monitor_context_switch", 00:06:59.156 "spdk_kill_instance", 00:06:59.156 "log_enable_timestamps", 00:06:59.156 "log_get_flags", 00:06:59.156 "log_clear_flag", 00:06:59.156 "log_set_flag", 00:06:59.156 "log_get_level", 00:06:59.156 "log_set_level", 00:06:59.156 "log_get_print_level", 00:06:59.156 "log_set_print_level", 00:06:59.156 "framework_enable_cpumask_locks", 00:06:59.156 "framework_disable_cpumask_locks", 00:06:59.156 "framework_wait_init", 00:06:59.156 "framework_start_init", 00:06:59.157 "scsi_get_devices", 00:06:59.157 "bdev_get_histogram", 00:06:59.157 "bdev_enable_histogram", 00:06:59.157 "bdev_set_qos_limit", 00:06:59.157 "bdev_set_qd_sampling_period", 00:06:59.157 "bdev_get_bdevs", 00:06:59.157 "bdev_reset_iostat", 00:06:59.157 "bdev_get_iostat", 00:06:59.157 "bdev_examine", 00:06:59.157 "bdev_wait_for_examine", 00:06:59.157 "bdev_set_options", 00:06:59.157 "accel_get_stats", 00:06:59.157 "accel_set_options", 00:06:59.157 "accel_set_driver", 00:06:59.157 "accel_crypto_key_destroy", 00:06:59.157 "accel_crypto_keys_get", 00:06:59.157 "accel_crypto_key_create", 00:06:59.157 "accel_assign_opc", 00:06:59.157 
"accel_get_module_info", 00:06:59.157 "accel_get_opc_assignments", 00:06:59.157 "vmd_rescan", 00:06:59.157 "vmd_remove_device", 00:06:59.157 "vmd_enable", 00:06:59.157 "sock_get_default_impl", 00:06:59.157 "sock_set_default_impl", 00:06:59.157 "sock_impl_set_options", 00:06:59.157 "sock_impl_get_options", 00:06:59.157 "iobuf_get_stats", 00:06:59.157 "iobuf_set_options", 00:06:59.157 "keyring_get_keys", 00:06:59.157 "vfu_tgt_set_base_path", 00:06:59.157 "framework_get_pci_devices", 00:06:59.157 "framework_get_config", 00:06:59.157 "framework_get_subsystems", 00:06:59.157 "fsdev_set_opts", 00:06:59.157 "fsdev_get_opts", 00:06:59.157 "trace_get_info", 00:06:59.157 "trace_get_tpoint_group_mask", 00:06:59.157 "trace_disable_tpoint_group", 00:06:59.157 "trace_enable_tpoint_group", 00:06:59.157 "trace_clear_tpoint_mask", 00:06:59.157 "trace_set_tpoint_mask", 00:06:59.157 "notify_get_notifications", 00:06:59.157 "notify_get_types", 00:06:59.157 "spdk_get_version", 00:06:59.157 "rpc_get_methods" 00:06:59.157 ] 00:06:59.157 14:11:26 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:59.157 14:11:26 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:59.157 14:11:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:59.157 14:11:26 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:59.157 14:11:26 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58864 00:06:59.157 14:11:26 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 58864 ']' 00:06:59.157 14:11:26 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 58864 00:06:59.157 14:11:26 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:06:59.157 14:11:26 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:59.157 14:11:26 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58864 00:06:59.416 14:11:26 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:59.416 killing process with pid 58864 00:06:59.416 14:11:26 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:59.416 14:11:26 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58864' 00:06:59.416 14:11:26 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 58864 00:06:59.416 14:11:26 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 58864 00:07:01.948 ************************************ 00:07:01.948 END TEST spdkcli_tcp 00:07:01.948 ************************************ 00:07:01.948 00:07:01.948 real 0m4.433s 00:07:01.948 user 0m7.850s 00:07:01.948 sys 0m0.761s 00:07:01.948 14:11:29 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:01.948 14:11:29 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:01.948 14:11:29 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:01.948 14:11:29 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:01.948 14:11:29 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:01.948 14:11:29 -- common/autotest_common.sh@10 -- # set +x 00:07:01.948 ************************************ 00:07:01.948 START TEST dpdk_mem_utility 00:07:01.948 ************************************ 00:07:01.948 14:11:29 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:01.948 * Looking for test storage... 
00:07:01.948 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:07:01.948 14:11:29 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:01.948 14:11:29 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:07:01.948 14:11:29 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:01.948 14:11:29 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:01.948 14:11:29 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:01.948 14:11:29 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:01.948 14:11:29 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:01.948 14:11:29 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:07:01.948 14:11:29 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:07:01.948 14:11:29 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:07:01.948 14:11:29 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:07:01.948 14:11:29 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:07:01.948 14:11:29 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:07:01.948 14:11:29 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:07:01.948 14:11:29 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:01.948 14:11:29 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:07:01.948 14:11:29 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:07:01.948 14:11:29 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:01.949 14:11:29 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:01.949 14:11:29 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:07:01.949 14:11:29 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:07:01.949 14:11:29 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:01.949 14:11:29 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:07:01.949 14:11:29 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:07:01.949 14:11:29 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:07:01.949 14:11:29 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:07:01.949 14:11:29 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:01.949 14:11:29 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:07:01.949 14:11:29 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:07:01.949 14:11:29 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:01.949 14:11:29 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:01.949 14:11:29 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:07:01.949 14:11:29 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:01.949 14:11:29 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:01.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.949 --rc genhtml_branch_coverage=1 00:07:01.949 --rc genhtml_function_coverage=1 00:07:01.949 --rc genhtml_legend=1 00:07:01.949 --rc geninfo_all_blocks=1 00:07:01.949 --rc geninfo_unexecuted_blocks=1 00:07:01.949 00:07:01.949 ' 00:07:01.949 14:11:29 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:01.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.949 --rc 
genhtml_branch_coverage=1 00:07:01.949 --rc genhtml_function_coverage=1 00:07:01.949 --rc genhtml_legend=1 00:07:01.949 --rc geninfo_all_blocks=1 00:07:01.949 --rc geninfo_unexecuted_blocks=1 00:07:01.949 00:07:01.949 ' 00:07:01.949 14:11:29 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:01.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.949 --rc genhtml_branch_coverage=1 00:07:01.949 --rc genhtml_function_coverage=1 00:07:01.949 --rc genhtml_legend=1 00:07:01.949 --rc geninfo_all_blocks=1 00:07:01.949 --rc geninfo_unexecuted_blocks=1 00:07:01.949 00:07:01.949 ' 00:07:01.949 14:11:29 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:01.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.949 --rc genhtml_branch_coverage=1 00:07:01.949 --rc genhtml_function_coverage=1 00:07:01.949 --rc genhtml_legend=1 00:07:01.949 --rc geninfo_all_blocks=1 00:07:01.949 --rc geninfo_unexecuted_blocks=1 00:07:01.949 00:07:01.949 ' 00:07:01.949 14:11:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:01.949 14:11:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58986 00:07:01.949 14:11:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:01.949 14:11:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58986 00:07:01.949 14:11:29 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 58986 ']' 00:07:01.949 14:11:29 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.949 14:11:29 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:01.949 14:11:29 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.949 14:11:29 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:01.949 14:11:29 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:02.207 [2024-11-06 14:11:29.707776] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:07:02.207 [2024-11-06 14:11:29.708170] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58986 ] 00:07:02.465 [2024-11-06 14:11:29.882199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.465 [2024-11-06 14:11:30.034234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.722 [2024-11-06 14:11:30.299618] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:03.657 14:11:30 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:03.657 14:11:30 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:07:03.657 14:11:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:03.657 14:11:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:03.657 14:11:30 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.657 14:11:30 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:03.657 { 00:07:03.657 "filename": "/tmp/spdk_mem_dump.txt" 00:07:03.657 } 00:07:03.657 14:11:30 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.657 14:11:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:03.657 DPDK memory size 816.000000 MiB in 1 heap(s) 00:07:03.657 1 heaps totaling size 816.000000 MiB 00:07:03.657 size: 816.000000 MiB heap id: 0 00:07:03.657 end heaps---------- 00:07:03.657 9 mempools totaling size 595.772034 MiB 00:07:03.657 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:03.657 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:03.657 size: 92.545471 MiB name: bdev_io_58986 00:07:03.657 size: 50.003479 MiB name: msgpool_58986 00:07:03.657 size: 36.509338 MiB name: fsdev_io_58986 00:07:03.657 size: 21.763794 MiB name: PDU_Pool 00:07:03.657 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:03.657 size: 4.133484 MiB name: evtpool_58986 00:07:03.657 size: 0.026123 MiB name: Session_Pool 00:07:03.657 end mempools------- 00:07:03.657 6 memzones totaling size 4.142822 MiB 00:07:03.657 size: 1.000366 MiB name: RG_ring_0_58986 00:07:03.657 size: 1.000366 MiB name: RG_ring_1_58986 00:07:03.657 size: 1.000366 MiB name: RG_ring_4_58986 00:07:03.657 size: 1.000366 MiB name: RG_ring_5_58986 00:07:03.657 size: 0.125366 MiB name: RG_ring_2_58986 00:07:03.657 size: 0.015991 MiB name: RG_ring_3_58986 00:07:03.657 end memzones------- 00:07:03.657 14:11:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:07:03.657 heap id: 0 total size: 816.000000 MiB number of busy elements: 324 number of free elements: 18 00:07:03.657 list of free elements. 
size: 16.789185 MiB 00:07:03.657 element at address: 0x200006400000 with size: 1.995972 MiB 00:07:03.657 element at address: 0x20000a600000 with size: 1.995972 MiB 00:07:03.657 element at address: 0x200003e00000 with size: 1.991028 MiB 00:07:03.657 element at address: 0x200018d00040 with size: 0.999939 MiB 00:07:03.657 element at address: 0x200019100040 with size: 0.999939 MiB 00:07:03.657 element at address: 0x200019200000 with size: 0.999084 MiB 00:07:03.657 element at address: 0x200031e00000 with size: 0.994324 MiB 00:07:03.657 element at address: 0x200000400000 with size: 0.992004 MiB 00:07:03.657 element at address: 0x200018a00000 with size: 0.959656 MiB 00:07:03.657 element at address: 0x200019500040 with size: 0.936401 MiB 00:07:03.657 element at address: 0x200000200000 with size: 0.716980 MiB 00:07:03.657 element at address: 0x20001ac00000 with size: 0.559753 MiB 00:07:03.657 element at address: 0x200000c00000 with size: 0.490173 MiB 00:07:03.657 element at address: 0x200018e00000 with size: 0.487976 MiB 00:07:03.657 element at address: 0x200019600000 with size: 0.485413 MiB 00:07:03.657 element at address: 0x200012c00000 with size: 0.443237 MiB 00:07:03.657 element at address: 0x200028000000 with size: 0.390442 MiB 00:07:03.657 element at address: 0x200000800000 with size: 0.350891 MiB 00:07:03.657 list of standard malloc elements. size: 199.289917 MiB 00:07:03.657 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:07:03.657 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:07:03.657 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:07:03.657 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:07:03.657 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:07:03.657 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:07:03.657 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:07:03.657 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:07:03.657 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:07:03.657 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:07:03.657 element at address: 0x200012bff040 with size: 0.000305 MiB 00:07:03.657 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:07:03.657 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:07:03.657 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:07:03.657 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:07:03.657 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:07:03.657 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:07:03.658 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:07:03.658 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:07:03.658 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:07:03.658 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:07:03.658 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:07:03.658 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:07:03.658 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:07:03.658 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:07:03.658 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:07:03.658 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:07:03.658 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:07:03.658 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:07:03.658 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:07:03.658 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:07:03.658 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:07:03.658 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:07:03.658 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:07:03.658 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:07:03.658 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:07:03.658 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:07:03.658 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:07:03.658 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:07:03.658 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:07:03.658 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:07:03.658 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:07:03.658 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:07:03.658 element at 
address: 0x200000c7e5c0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:07:03.658 element at address: 0x200000cff000 with size: 0.000244 MiB 00:07:03.658 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:07:03.658 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:07:03.658 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:07:03.658 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:07:03.658 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:07:03.658 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:07:03.658 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:07:03.658 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:07:03.658 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:07:03.658 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:07:03.658 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:07:03.658 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:07:03.658 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:07:03.658 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:07:03.658 element at address: 0x200012bff180 with size: 0.000244 MiB 00:07:03.658 element at address: 0x200012bff280 with size: 0.000244 MiB 00:07:03.658 element at address: 0x200012bff380 with size: 0.000244 MiB 00:07:03.658 element at address: 0x200012bff480 with size: 0.000244 MiB 00:07:03.658 element at address: 0x200012bff580 with size: 0.000244 MiB 00:07:03.658 element at address: 0x200012bff680 with size: 0.000244 MiB 00:07:03.658 element at address: 0x200012bff780 with size: 0.000244 MiB 00:07:03.658 element at address: 0x200012bff880 with size: 0.000244 MiB 00:07:03.658 element at address: 0x200012bff980 with size: 0.000244 MiB 00:07:03.658 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:07:03.658 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:07:03.658 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:07:03.658 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:07:03.658 element at address: 0x200012c71780 with size: 0.000244 MiB 00:07:03.658 element at address: 0x200012c71880 with size: 0.000244 MiB 00:07:03.658 element at address: 0x200012c71980 with size: 0.000244 MiB 00:07:03.658 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:07:03.658 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:07:03.658 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:07:03.658 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:07:03.658 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:07:03.658 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:07:03.658 element at address: 0x200012c72080 with size: 0.000244 MiB 00:07:03.658 element at address: 0x200012c72180 with size: 0.000244 MiB 00:07:03.658 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:07:03.658 element at address: 0x200018e7cec0 
with size: 0.000244 MiB 00:07:03.658 element at address: 0x200018e7cfc0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:07:03.658 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:07:03.658 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:07:03.658 element at address: 0x20001ac8f4c0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x20001ac8f5c0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x20001ac8f6c0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x20001ac8f7c0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x20001ac8f8c0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x20001ac8f9c0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x20001ac8fac0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x20001ac8fbc0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x20001ac8fcc0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x20001ac8fdc0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x20001ac8fec0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x20001ac8ffc0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x20001ac900c0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x20001ac901c0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:07:03.658 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac914c0 with size: 0.000244 MiB 
00:07:03.659 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:07:03.659 element at 
address: 0x20001ac946c0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:07:03.659 element at address: 0x200028063f40 with size: 0.000244 MiB 00:07:03.659 element at address: 0x200028064040 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806ad00 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806af80 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806b080 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806b180 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806b280 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806b380 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806b480 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806b580 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806b680 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806b780 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806b880 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806b980 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806be80 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806c080 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806c180 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806c280 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806c380 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806c480 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806c580 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806c680 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806c780 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806c880 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806c980 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806cf80 
with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806d080 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806d180 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806d280 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806d380 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806d480 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806d580 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806d680 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806d780 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806d880 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806d980 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806da80 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806db80 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806de80 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806df80 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806e080 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806e180 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806e280 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806e380 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806e480 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806e580 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806e680 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806e780 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806e880 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806e980 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806f080 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806f180 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806f280 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806f380 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806f480 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806f580 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806f680 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806f780 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806f880 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806f980 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:07:03.659 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:07:03.659 list of memzone associated elements. 
size: 599.920898 MiB 00:07:03.659 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:07:03.660 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:03.660 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:07:03.660 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:03.660 element at address: 0x200012df4740 with size: 92.045105 MiB 00:07:03.660 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58986_0 00:07:03.660 element at address: 0x200000dff340 with size: 48.003113 MiB 00:07:03.660 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58986_0 00:07:03.660 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:07:03.660 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58986_0 00:07:03.660 element at address: 0x2000197be900 with size: 20.255615 MiB 00:07:03.660 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:03.660 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:07:03.660 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:03.660 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:07:03.660 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58986_0 00:07:03.660 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:07:03.660 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58986 00:07:03.660 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:07:03.660 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58986 00:07:03.660 element at address: 0x200018efde00 with size: 1.008179 MiB 00:07:03.660 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:03.660 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:07:03.660 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:03.660 element at address: 0x200018afde00 with size: 1.008179 MiB 00:07:03.660 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:03.660 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:07:03.660 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:03.660 element at address: 0x200000cff100 with size: 1.000549 MiB 00:07:03.660 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58986 00:07:03.660 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:07:03.660 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58986 00:07:03.660 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:07:03.660 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58986 00:07:03.660 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:07:03.660 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58986 00:07:03.660 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:07:03.660 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58986 00:07:03.660 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:07:03.660 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58986 00:07:03.660 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:07:03.660 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:03.660 element at address: 0x200012c72280 with size: 0.500549 MiB 00:07:03.660 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:03.660 element at address: 0x20001967c440 with size: 0.250549 MiB 00:07:03.660 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:07:03.660 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:07:03.660 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58986 00:07:03.660 element at address: 0x20000085df80 with size: 0.125549 MiB 00:07:03.660 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58986 00:07:03.660 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:07:03.660 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:03.660 element at address: 0x200028064140 with size: 0.023804 MiB 00:07:03.660 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:03.660 element at address: 0x200000859d40 with size: 0.016174 MiB 00:07:03.660 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58986 00:07:03.660 element at address: 0x20002806a2c0 with size: 0.002502 MiB 00:07:03.660 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:03.660 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:07:03.660 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58986 00:07:03.660 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:07:03.660 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58986 00:07:03.660 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:07:03.660 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58986 00:07:03.660 element at address: 0x20002806ae00 with size: 0.000366 MiB 00:07:03.660 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:03.660 14:11:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:03.660 14:11:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58986 00:07:03.660 14:11:31 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 58986 ']' 00:07:03.660 14:11:31 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 58986 00:07:03.660 14:11:31 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:07:03.660 14:11:31 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:03.660 14:11:31 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58986 00:07:03.660 killing process with pid 58986 00:07:03.660 14:11:31 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:03.660 14:11:31 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:03.660 14:11:31 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58986' 00:07:03.660 14:11:31 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 58986 00:07:03.660 14:11:31 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 58986 00:07:06.206 ************************************ 00:07:06.206 END TEST dpdk_mem_utility 00:07:06.206 ************************************ 00:07:06.206 00:07:06.206 real 0m4.232s 00:07:06.206 user 0m4.092s 00:07:06.206 sys 0m0.668s 00:07:06.206 14:11:33 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:06.206 14:11:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:06.206 14:11:33 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:06.206 14:11:33 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:06.206 14:11:33 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:06.206 14:11:33 -- common/autotest_common.sh@10 -- # set +x 
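
The dpdk_mem_utility test above calls the env_dpdk_get_mem_stats RPC (which reports /tmp/spdk_mem_dump.txt) and then runs scripts/dpdk_mem_info.py, whose summary prints each heap, mempool, and memzone as "size: <N> MiB name: <name>" lines. The sketch below re-totals the size/name lines from such a summary; the input path, regex, and the choice to fold per-PID pools like msgpool_58986 under a common name are assumptions for illustration, not part of the test.

```python
#!/usr/bin/env python3
# Re-total a saved dpdk_mem_info.py summary (e.g. the 9 mempools above that add
# up to ~595.77 MiB).  "dpdk_mem_info.txt" is an assumed capture of that output.
import re
from collections import defaultdict

SIZE_LINE = re.compile(r"size:\s+([0-9.]+)\s+MiB\s+name:\s+(\S+)")


def totals_by_pool(path="dpdk_mem_info.txt"):
    totals = defaultdict(float)
    with open(path) as f:
        for line in f:
            m = SIZE_LINE.search(line)
            if not m:
                continue
            size_mib, name = float(m.group(1)), m.group(2)
            # Fold per-PID pools (msgpool_58986, bdev_io_58986, ...) together.
            totals[re.sub(r"_\d+$", "", name)] += size_mib
    return dict(totals)


if __name__ == "__main__":
    for name, mib in sorted(totals_by_pool().items(), key=lambda kv: -kv[1]):
        print(f"{mib:12.6f} MiB  {name}")
```
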
00:07:06.206 ************************************ 00:07:06.206 START TEST event 00:07:06.206 ************************************ 00:07:06.206 14:11:33 event -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:06.206 * Looking for test storage... 00:07:06.206 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:06.206 14:11:33 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:06.206 14:11:33 event -- common/autotest_common.sh@1691 -- # lcov --version 00:07:06.206 14:11:33 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:06.465 14:11:33 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:06.465 14:11:33 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:06.465 14:11:33 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:06.465 14:11:33 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:06.465 14:11:33 event -- scripts/common.sh@336 -- # IFS=.-: 00:07:06.465 14:11:33 event -- scripts/common.sh@336 -- # read -ra ver1 00:07:06.465 14:11:33 event -- scripts/common.sh@337 -- # IFS=.-: 00:07:06.465 14:11:33 event -- scripts/common.sh@337 -- # read -ra ver2 00:07:06.465 14:11:33 event -- scripts/common.sh@338 -- # local 'op=<' 00:07:06.465 14:11:33 event -- scripts/common.sh@340 -- # ver1_l=2 00:07:06.465 14:11:33 event -- scripts/common.sh@341 -- # ver2_l=1 00:07:06.465 14:11:33 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:06.465 14:11:33 event -- scripts/common.sh@344 -- # case "$op" in 00:07:06.465 14:11:33 event -- scripts/common.sh@345 -- # : 1 00:07:06.465 14:11:33 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:06.465 14:11:33 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:06.465 14:11:33 event -- scripts/common.sh@365 -- # decimal 1 00:07:06.465 14:11:33 event -- scripts/common.sh@353 -- # local d=1 00:07:06.465 14:11:33 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:06.465 14:11:33 event -- scripts/common.sh@355 -- # echo 1 00:07:06.465 14:11:33 event -- scripts/common.sh@365 -- # ver1[v]=1 00:07:06.465 14:11:33 event -- scripts/common.sh@366 -- # decimal 2 00:07:06.465 14:11:33 event -- scripts/common.sh@353 -- # local d=2 00:07:06.465 14:11:33 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:06.465 14:11:33 event -- scripts/common.sh@355 -- # echo 2 00:07:06.465 14:11:33 event -- scripts/common.sh@366 -- # ver2[v]=2 00:07:06.465 14:11:33 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:06.465 14:11:33 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:06.465 14:11:33 event -- scripts/common.sh@368 -- # return 0 00:07:06.465 14:11:33 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:06.465 14:11:33 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:06.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.465 --rc genhtml_branch_coverage=1 00:07:06.465 --rc genhtml_function_coverage=1 00:07:06.465 --rc genhtml_legend=1 00:07:06.465 --rc geninfo_all_blocks=1 00:07:06.465 --rc geninfo_unexecuted_blocks=1 00:07:06.465 00:07:06.465 ' 00:07:06.465 14:11:33 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:06.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.465 --rc genhtml_branch_coverage=1 00:07:06.466 --rc genhtml_function_coverage=1 00:07:06.466 --rc genhtml_legend=1 00:07:06.466 --rc 
geninfo_all_blocks=1 00:07:06.466 --rc geninfo_unexecuted_blocks=1 00:07:06.466 00:07:06.466 ' 00:07:06.466 14:11:33 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:06.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.466 --rc genhtml_branch_coverage=1 00:07:06.466 --rc genhtml_function_coverage=1 00:07:06.466 --rc genhtml_legend=1 00:07:06.466 --rc geninfo_all_blocks=1 00:07:06.466 --rc geninfo_unexecuted_blocks=1 00:07:06.466 00:07:06.466 ' 00:07:06.466 14:11:33 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:06.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.466 --rc genhtml_branch_coverage=1 00:07:06.466 --rc genhtml_function_coverage=1 00:07:06.466 --rc genhtml_legend=1 00:07:06.466 --rc geninfo_all_blocks=1 00:07:06.466 --rc geninfo_unexecuted_blocks=1 00:07:06.466 00:07:06.466 ' 00:07:06.466 14:11:33 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:06.466 14:11:33 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:06.466 14:11:33 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:06.466 14:11:33 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:07:06.466 14:11:33 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:06.466 14:11:33 event -- common/autotest_common.sh@10 -- # set +x 00:07:06.466 ************************************ 00:07:06.466 START TEST event_perf 00:07:06.466 ************************************ 00:07:06.466 14:11:33 event.event_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:06.466 Running I/O for 1 seconds...[2024-11-06 14:11:33.933393] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:07:06.466 [2024-11-06 14:11:33.933715] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59094 ] 00:07:06.724 [2024-11-06 14:11:34.136939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:06.724 [2024-11-06 14:11:34.267014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.724 [2024-11-06 14:11:34.267142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:06.724 Running I/O for 1 seconds...[2024-11-06 14:11:34.267223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.724 [2024-11-06 14:11:34.267260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:08.102 00:07:08.102 lcore 0: 193013 00:07:08.102 lcore 1: 193011 00:07:08.102 lcore 2: 193010 00:07:08.102 lcore 3: 193010 00:07:08.102 done. 
00:07:08.102 00:07:08.102 real 0m1.637s 00:07:08.102 user 0m4.378s 00:07:08.102 sys 0m0.133s 00:07:08.102 14:11:35 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:08.102 14:11:35 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:08.102 ************************************ 00:07:08.102 END TEST event_perf 00:07:08.102 ************************************ 00:07:08.102 14:11:35 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:08.102 14:11:35 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:07:08.102 14:11:35 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:08.102 14:11:35 event -- common/autotest_common.sh@10 -- # set +x 00:07:08.102 ************************************ 00:07:08.102 START TEST event_reactor 00:07:08.102 ************************************ 00:07:08.102 14:11:35 event.event_reactor -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:08.102 [2024-11-06 14:11:35.647143] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:07:08.102 [2024-11-06 14:11:35.647480] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59139 ] 00:07:08.361 [2024-11-06 14:11:35.829733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.361 [2024-11-06 14:11:35.971582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.739 test_start 00:07:09.739 oneshot 00:07:09.739 tick 100 00:07:09.739 tick 100 00:07:09.739 tick 250 00:07:09.739 tick 100 00:07:09.739 tick 100 00:07:09.739 tick 100 00:07:09.739 tick 250 00:07:09.739 tick 500 00:07:09.739 tick 100 00:07:09.739 tick 100 00:07:09.739 tick 250 00:07:09.739 tick 100 00:07:09.739 tick 100 00:07:09.739 test_end 00:07:09.739 00:07:09.739 real 0m1.601s 00:07:09.739 user 0m1.372s 00:07:09.739 sys 0m0.120s 00:07:09.739 14:11:37 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:09.739 ************************************ 00:07:09.739 END TEST event_reactor 00:07:09.739 14:11:37 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:09.739 ************************************ 00:07:09.739 14:11:37 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:09.739 14:11:37 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:07:09.739 14:11:37 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:09.739 14:11:37 event -- common/autotest_common.sh@10 -- # set +x 00:07:09.739 ************************************ 00:07:09.739 START TEST event_reactor_perf 00:07:09.739 ************************************ 00:07:09.739 14:11:37 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:09.740 [2024-11-06 14:11:37.322994] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:07:09.740 [2024-11-06 14:11:37.323106] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59175 ] 00:07:10.006 [2024-11-06 14:11:37.504990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.006 [2024-11-06 14:11:37.621055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.384 test_start 00:07:11.384 test_end 00:07:11.384 Performance: 364624 events per second 00:07:11.384 00:07:11.384 real 0m1.572s 00:07:11.384 user 0m1.353s 00:07:11.384 sys 0m0.112s 00:07:11.384 14:11:38 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:11.384 14:11:38 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:11.384 ************************************ 00:07:11.384 END TEST event_reactor_perf 00:07:11.384 ************************************ 00:07:11.384 14:11:38 event -- event/event.sh@49 -- # uname -s 00:07:11.384 14:11:38 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:11.384 14:11:38 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:11.384 14:11:38 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:11.384 14:11:38 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:11.384 14:11:38 event -- common/autotest_common.sh@10 -- # set +x 00:07:11.384 ************************************ 00:07:11.384 START TEST event_scheduler 00:07:11.384 ************************************ 00:07:11.384 14:11:38 event.event_scheduler -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:11.643 * Looking for test storage... 
00:07:11.643 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:07:11.643 14:11:39 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:11.643 14:11:39 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:07:11.643 14:11:39 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:11.643 14:11:39 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:11.643 14:11:39 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:11.643 14:11:39 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:11.643 14:11:39 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:11.643 14:11:39 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:07:11.643 14:11:39 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:07:11.643 14:11:39 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:07:11.643 14:11:39 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:07:11.643 14:11:39 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:07:11.643 14:11:39 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:07:11.643 14:11:39 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:07:11.643 14:11:39 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:11.643 14:11:39 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:07:11.643 14:11:39 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:07:11.643 14:11:39 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:11.643 14:11:39 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:11.643 14:11:39 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:07:11.643 14:11:39 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:07:11.643 14:11:39 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:11.643 14:11:39 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:07:11.643 14:11:39 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:07:11.643 14:11:39 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:07:11.643 14:11:39 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:07:11.643 14:11:39 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:11.643 14:11:39 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:07:11.643 14:11:39 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:07:11.643 14:11:39 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:11.643 14:11:39 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:11.643 14:11:39 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:07:11.643 14:11:39 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:11.643 14:11:39 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:11.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.643 --rc genhtml_branch_coverage=1 00:07:11.643 --rc genhtml_function_coverage=1 00:07:11.643 --rc genhtml_legend=1 00:07:11.643 --rc geninfo_all_blocks=1 00:07:11.643 --rc geninfo_unexecuted_blocks=1 00:07:11.643 00:07:11.643 ' 00:07:11.643 14:11:39 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:11.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.643 --rc genhtml_branch_coverage=1 00:07:11.643 --rc genhtml_function_coverage=1 00:07:11.643 --rc genhtml_legend=1 00:07:11.643 --rc geninfo_all_blocks=1 00:07:11.643 --rc geninfo_unexecuted_blocks=1 00:07:11.643 00:07:11.643 ' 00:07:11.643 14:11:39 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:11.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.643 --rc genhtml_branch_coverage=1 00:07:11.644 --rc genhtml_function_coverage=1 00:07:11.644 --rc genhtml_legend=1 00:07:11.644 --rc geninfo_all_blocks=1 00:07:11.644 --rc geninfo_unexecuted_blocks=1 00:07:11.644 00:07:11.644 ' 00:07:11.644 14:11:39 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:11.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.644 --rc genhtml_branch_coverage=1 00:07:11.644 --rc genhtml_function_coverage=1 00:07:11.644 --rc genhtml_legend=1 00:07:11.644 --rc geninfo_all_blocks=1 00:07:11.644 --rc geninfo_unexecuted_blocks=1 00:07:11.644 00:07:11.644 ' 00:07:11.644 14:11:39 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:11.644 14:11:39 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59246 00:07:11.644 14:11:39 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:11.644 14:11:39 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:11.644 14:11:39 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59246 00:07:11.644 14:11:39 
event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 59246 ']' 00:07:11.644 14:11:39 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.644 14:11:39 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:11.644 14:11:39 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.644 14:11:39 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:11.644 14:11:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:11.644 [2024-11-06 14:11:39.262085] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:07:11.644 [2024-11-06 14:11:39.262430] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59246 ] 00:07:11.903 [2024-11-06 14:11:39.448250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:12.161 [2024-11-06 14:11:39.575076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.161 [2024-11-06 14:11:39.575248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.161 [2024-11-06 14:11:39.575502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:12.161 [2024-11-06 14:11:39.575280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:12.728 14:11:40 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:12.728 14:11:40 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:07:12.728 14:11:40 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:12.728 14:11:40 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.728 14:11:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:12.728 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:12.728 POWER: Cannot set governor of lcore 0 to userspace 00:07:12.728 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:12.728 POWER: Cannot set governor of lcore 0 to performance 00:07:12.728 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:12.728 POWER: Cannot set governor of lcore 0 to userspace 00:07:12.728 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:12.728 POWER: Cannot set governor of lcore 0 to userspace 00:07:12.728 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:07:12.728 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:07:12.728 POWER: Unable to set Power Management Environment for lcore 0 00:07:12.728 [2024-11-06 14:11:40.097058] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:07:12.728 [2024-11-06 14:11:40.097085] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:07:12.728 [2024-11-06 14:11:40.097099] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:07:12.728 [2024-11-06 14:11:40.097130] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:12.728 [2024-11-06 14:11:40.097142] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:12.728 [2024-11-06 14:11:40.097156] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:12.728 14:11:40 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.728 14:11:40 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:12.728 14:11:40 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.728 14:11:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:12.728 [2024-11-06 14:11:40.306238] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:12.986 [2024-11-06 14:11:40.424075] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:07:12.986 14:11:40 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.986 14:11:40 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:12.986 14:11:40 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:12.986 14:11:40 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:12.986 14:11:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:12.986 ************************************ 00:07:12.986 START TEST scheduler_create_thread 00:07:12.986 ************************************ 00:07:12.986 14:11:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:07:12.986 14:11:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:12.986 14:11:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.986 14:11:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:12.986 2 00:07:12.986 14:11:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.986 14:11:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:12.986 14:11:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.986 14:11:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:12.987 3 00:07:12.987 14:11:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.987 14:11:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:12.987 14:11:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.987 14:11:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:12.987 4 00:07:12.987 14:11:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.987 14:11:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:12.987 14:11:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.987 14:11:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:12.987 5 00:07:12.987 14:11:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.987 14:11:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:12.987 14:11:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.987 14:11:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:12.987 6 00:07:12.987 14:11:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.987 14:11:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:12.987 14:11:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.987 14:11:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:12.987 7 00:07:12.987 14:11:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.987 14:11:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:12.987 14:11:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.987 14:11:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:12.987 8 00:07:12.987 14:11:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.987 14:11:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:12.987 14:11:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.987 14:11:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:12.987 9 00:07:12.987 14:11:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.987 14:11:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:12.987 14:11:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.987 14:11:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:12.987 10 00:07:12.987 14:11:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.987 14:11:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:12.987 14:11:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.987 14:11:40 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:14.362 14:11:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.362 14:11:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:14.362 14:11:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:14.362 14:11:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.362 14:11:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:15.298 14:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.298 14:11:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:15.298 14:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.298 14:11:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:16.269 14:11:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.269 14:11:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:16.269 14:11:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:16.269 14:11:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.269 14:11:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:16.836 ************************************ 00:07:16.836 END TEST scheduler_create_thread 00:07:16.836 ************************************ 00:07:16.836 14:11:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.836 00:07:16.836 real 0m3.884s 00:07:16.836 user 0m0.027s 00:07:16.836 sys 0m0.009s 00:07:16.836 14:11:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:16.836 14:11:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:16.836 14:11:44 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:16.836 14:11:44 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59246 00:07:16.836 14:11:44 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 59246 ']' 00:07:16.836 14:11:44 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 59246 00:07:16.836 14:11:44 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:07:16.836 14:11:44 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:16.836 14:11:44 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59246 00:07:16.836 killing process with pid 59246 00:07:16.837 14:11:44 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:07:16.837 14:11:44 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:07:16.837 14:11:44 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 
59246' 00:07:16.837 14:11:44 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 59246 00:07:16.837 14:11:44 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 59246 00:07:17.096 [2024-11-06 14:11:44.703227] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:07:18.473 00:07:18.473 real 0m6.930s 00:07:18.473 user 0m14.140s 00:07:18.473 sys 0m0.562s 00:07:18.473 14:11:45 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:18.473 14:11:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:18.473 ************************************ 00:07:18.473 END TEST event_scheduler 00:07:18.473 ************************************ 00:07:18.473 14:11:45 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:18.473 14:11:45 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:18.473 14:11:45 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:18.473 14:11:45 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:18.473 14:11:45 event -- common/autotest_common.sh@10 -- # set +x 00:07:18.473 ************************************ 00:07:18.473 START TEST app_repeat 00:07:18.473 ************************************ 00:07:18.473 14:11:45 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:07:18.473 14:11:45 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.473 14:11:45 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:18.473 14:11:45 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:18.473 14:11:45 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:18.473 14:11:45 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:18.473 14:11:45 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:18.473 14:11:45 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:18.473 Process app_repeat pid: 59376 00:07:18.473 14:11:45 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59376 00:07:18.473 14:11:45 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:18.473 14:11:45 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:18.473 14:11:45 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59376' 00:07:18.473 14:11:45 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:18.473 spdk_app_start Round 0 00:07:18.473 14:11:45 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:18.473 14:11:45 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59376 /var/tmp/spdk-nbd.sock 00:07:18.473 14:11:45 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 59376 ']' 00:07:18.473 14:11:45 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:18.473 14:11:45 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:18.473 14:11:45 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:18.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
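The scheduler section above reduces to a short RPC sequence against the test app. The following is a minimal sketch of that sequence, not part of the captured output, assuming the scheduler test binary from this run is still listening on /var/tmp/spdk.sock and that rpc_cmd is the harness wrapper around scripts/rpc.py with the scheduler test plugin on PYTHONPATH:

  # Pick the dynamic scheduler and finish framework init (same calls as scheduler.sh above).
  rpc_cmd framework_set_scheduler dynamic
  rpc_cmd framework_start_init
  # Create a thread pinned to core 0 that reports 100% activity, via the test plugin.
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  # Create an unpinned 0%-active thread and raise it to 50% activity.
  thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
  rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50
  # Create one more thread only to delete it again.
  thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
  rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$thread_id"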
00:07:18.473 14:11:45 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:18.473 14:11:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:18.473 [2024-11-06 14:11:45.998283] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:07:18.473 [2024-11-06 14:11:45.998441] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59376 ] 00:07:18.733 [2024-11-06 14:11:46.187264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:18.733 [2024-11-06 14:11:46.312302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.733 [2024-11-06 14:11:46.312335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.991 [2024-11-06 14:11:46.518301] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:19.249 14:11:46 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:19.249 14:11:46 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:07:19.249 14:11:46 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:19.816 Malloc0 00:07:19.816 14:11:47 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:19.816 Malloc1 00:07:20.075 14:11:47 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:20.075 14:11:47 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.075 14:11:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:20.075 14:11:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:20.075 14:11:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:20.075 14:11:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:20.075 14:11:47 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:20.075 14:11:47 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.075 14:11:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:20.075 14:11:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:20.075 14:11:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:20.075 14:11:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:20.075 14:11:47 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:20.075 14:11:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:20.075 14:11:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:20.075 14:11:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:20.075 /dev/nbd0 00:07:20.075 14:11:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:20.075 14:11:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:20.075 14:11:47 event.app_repeat -- common/autotest_common.sh@870 -- # local 
nbd_name=nbd0 00:07:20.075 14:11:47 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:07:20.075 14:11:47 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:20.075 14:11:47 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:20.075 14:11:47 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:07:20.075 14:11:47 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:07:20.075 14:11:47 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:20.075 14:11:47 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:20.075 14:11:47 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:20.075 1+0 records in 00:07:20.075 1+0 records out 00:07:20.075 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243461 s, 16.8 MB/s 00:07:20.075 14:11:47 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:20.075 14:11:47 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:07:20.075 14:11:47 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:20.333 14:11:47 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:20.333 14:11:47 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:07:20.333 14:11:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:20.333 14:11:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:20.333 14:11:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:20.333 /dev/nbd1 00:07:20.333 14:11:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:20.333 14:11:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:20.333 14:11:47 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:07:20.333 14:11:47 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:07:20.333 14:11:47 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:20.333 14:11:47 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:20.333 14:11:47 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:07:20.591 14:11:47 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:07:20.591 14:11:47 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:20.591 14:11:47 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:20.591 14:11:47 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:20.591 1+0 records in 00:07:20.591 1+0 records out 00:07:20.591 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000471567 s, 8.7 MB/s 00:07:20.591 14:11:47 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:20.591 14:11:47 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:07:20.591 14:11:47 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:20.591 14:11:47 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:20.591 14:11:47 event.app_repeat -- 
common/autotest_common.sh@891 -- # return 0 00:07:20.591 14:11:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:20.591 14:11:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:20.591 14:11:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:20.591 14:11:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.591 14:11:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:20.850 14:11:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:20.850 { 00:07:20.850 "nbd_device": "/dev/nbd0", 00:07:20.850 "bdev_name": "Malloc0" 00:07:20.850 }, 00:07:20.850 { 00:07:20.850 "nbd_device": "/dev/nbd1", 00:07:20.850 "bdev_name": "Malloc1" 00:07:20.850 } 00:07:20.850 ]' 00:07:20.850 14:11:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:20.850 { 00:07:20.850 "nbd_device": "/dev/nbd0", 00:07:20.850 "bdev_name": "Malloc0" 00:07:20.850 }, 00:07:20.850 { 00:07:20.850 "nbd_device": "/dev/nbd1", 00:07:20.850 "bdev_name": "Malloc1" 00:07:20.850 } 00:07:20.850 ]' 00:07:20.850 14:11:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:20.850 14:11:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:20.850 /dev/nbd1' 00:07:20.850 14:11:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:20.850 /dev/nbd1' 00:07:20.850 14:11:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:20.850 14:11:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:20.850 14:11:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:20.850 14:11:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:20.850 14:11:48 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:20.850 14:11:48 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:20.850 14:11:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:20.850 14:11:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:20.850 14:11:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:20.850 14:11:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:20.850 14:11:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:20.850 14:11:48 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:20.850 256+0 records in 00:07:20.850 256+0 records out 00:07:20.850 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136613 s, 76.8 MB/s 00:07:20.850 14:11:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:20.850 14:11:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:20.850 256+0 records in 00:07:20.850 256+0 records out 00:07:20.850 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.030235 s, 34.7 MB/s 00:07:20.850 14:11:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:20.850 14:11:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:20.850 256+0 records in 00:07:20.850 
256+0 records out 00:07:20.850 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0270449 s, 38.8 MB/s 00:07:20.850 14:11:48 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:20.850 14:11:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:20.850 14:11:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:20.850 14:11:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:20.850 14:11:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:20.850 14:11:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:20.850 14:11:48 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:20.850 14:11:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:20.850 14:11:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:20.850 14:11:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:20.850 14:11:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:20.850 14:11:48 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:20.850 14:11:48 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:20.850 14:11:48 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.850 14:11:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:20.850 14:11:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:20.850 14:11:48 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:20.850 14:11:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:20.850 14:11:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:21.108 14:11:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:21.108 14:11:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:21.108 14:11:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:21.108 14:11:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:21.108 14:11:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:21.108 14:11:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:21.108 14:11:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:21.108 14:11:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:21.108 14:11:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:21.108 14:11:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:21.368 14:11:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:21.368 14:11:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:21.368 14:11:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:21.368 14:11:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:21.368 14:11:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:07:21.368 14:11:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:21.368 14:11:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:21.368 14:11:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:21.368 14:11:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:21.368 14:11:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:21.368 14:11:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:21.627 14:11:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:21.627 14:11:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:21.627 14:11:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:21.627 14:11:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:21.627 14:11:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:21.627 14:11:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:21.627 14:11:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:21.627 14:11:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:21.627 14:11:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:21.627 14:11:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:21.627 14:11:49 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:21.627 14:11:49 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:21.627 14:11:49 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:22.193 14:11:49 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:23.568 [2024-11-06 14:11:50.836362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:23.568 [2024-11-06 14:11:50.953757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.568 [2024-11-06 14:11:50.953760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:23.568 [2024-11-06 14:11:51.151030] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:23.568 [2024-11-06 14:11:51.151197] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:23.568 [2024-11-06 14:11:51.151222] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:25.501 spdk_app_start Round 1 00:07:25.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:25.501 14:11:52 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:25.501 14:11:52 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:25.501 14:11:52 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59376 /var/tmp/spdk-nbd.sock 00:07:25.501 14:11:52 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 59376 ']' 00:07:25.501 14:11:52 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:25.501 14:11:52 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:25.501 14:11:52 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
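Each app_repeat round above follows the same malloc-over-NBD verification pattern. Here is a condensed sketch of one round, not part of the captured output, assuming the app_repeat instance is listening on /var/tmp/spdk-nbd.sock and using a hypothetical $rpc shorthand for scripts/rpc.py:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  # Create two 64 MB malloc bdevs with 4096-byte blocks and expose them over NBD.
  $rpc bdev_malloc_create 64 4096            # returns Malloc0
  $rpc bdev_malloc_create 64 4096            # returns Malloc1
  $rpc nbd_start_disk Malloc0 /dev/nbd0
  $rpc nbd_start_disk Malloc1 /dev/nbd1
  # Push 1 MiB of random data through each NBD device, then compare it back against the source file.
  dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if=nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
      cmp -b -n 1M nbdrandtest "$nbd"
  done
  rm nbdrandtest
  # Detach both devices before the round is torn down.
  $rpc nbd_stop_disk /dev/nbd0
  $rpc nbd_stop_disk /dev/nbd1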
00:07:25.501 14:11:52 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:25.501 14:11:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:25.501 14:11:52 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:25.501 14:11:52 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:07:25.501 14:11:52 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:25.759 Malloc0 00:07:25.759 14:11:53 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:26.017 Malloc1 00:07:26.017 14:11:53 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:26.017 14:11:53 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:26.017 14:11:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:26.017 14:11:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:26.017 14:11:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:26.017 14:11:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:26.017 14:11:53 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:26.017 14:11:53 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:26.017 14:11:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:26.017 14:11:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:26.017 14:11:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:26.017 14:11:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:26.017 14:11:53 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:26.017 14:11:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:26.017 14:11:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:26.017 14:11:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:26.275 /dev/nbd0 00:07:26.275 14:11:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:26.275 14:11:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:26.275 14:11:53 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:07:26.275 14:11:53 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:07:26.275 14:11:53 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:26.275 14:11:53 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:26.275 14:11:53 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:07:26.275 14:11:53 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:07:26.275 14:11:53 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:26.275 14:11:53 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:26.275 14:11:53 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:26.275 1+0 records in 00:07:26.275 1+0 records out 
00:07:26.275 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000345206 s, 11.9 MB/s 00:07:26.275 14:11:53 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:26.275 14:11:53 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:07:26.275 14:11:53 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:26.275 14:11:53 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:26.275 14:11:53 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:07:26.275 14:11:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:26.275 14:11:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:26.275 14:11:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:26.533 /dev/nbd1 00:07:26.533 14:11:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:26.533 14:11:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:26.533 14:11:54 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:07:26.533 14:11:54 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:07:26.533 14:11:54 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:26.533 14:11:54 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:26.533 14:11:54 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:07:26.533 14:11:54 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:07:26.533 14:11:54 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:26.533 14:11:54 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:26.533 14:11:54 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:26.792 1+0 records in 00:07:26.792 1+0 records out 00:07:26.792 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269243 s, 15.2 MB/s 00:07:26.792 14:11:54 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:26.792 14:11:54 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:07:26.792 14:11:54 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:26.792 14:11:54 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:26.792 14:11:54 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:07:26.792 14:11:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:26.792 14:11:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:26.792 14:11:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:26.792 14:11:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:26.792 14:11:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:26.792 14:11:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:26.792 { 00:07:26.792 "nbd_device": "/dev/nbd0", 00:07:26.792 "bdev_name": "Malloc0" 00:07:26.792 }, 00:07:26.792 { 00:07:26.792 "nbd_device": "/dev/nbd1", 00:07:26.792 "bdev_name": "Malloc1" 00:07:26.792 } 
00:07:26.792 ]' 00:07:26.792 14:11:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:26.792 { 00:07:26.792 "nbd_device": "/dev/nbd0", 00:07:26.792 "bdev_name": "Malloc0" 00:07:26.792 }, 00:07:26.792 { 00:07:26.792 "nbd_device": "/dev/nbd1", 00:07:26.792 "bdev_name": "Malloc1" 00:07:26.792 } 00:07:26.792 ]' 00:07:26.792 14:11:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:27.059 14:11:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:27.059 /dev/nbd1' 00:07:27.059 14:11:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:27.059 /dev/nbd1' 00:07:27.059 14:11:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:27.059 14:11:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:27.059 14:11:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:27.059 14:11:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:27.059 14:11:54 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:27.059 14:11:54 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:27.059 14:11:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:27.059 14:11:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:27.059 14:11:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:27.059 14:11:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:27.059 14:11:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:27.059 14:11:54 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:27.059 256+0 records in 00:07:27.059 256+0 records out 00:07:27.059 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0130357 s, 80.4 MB/s 00:07:27.059 14:11:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:27.059 14:11:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:27.059 256+0 records in 00:07:27.059 256+0 records out 00:07:27.059 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0309064 s, 33.9 MB/s 00:07:27.059 14:11:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:27.059 14:11:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:27.059 256+0 records in 00:07:27.059 256+0 records out 00:07:27.059 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.034043 s, 30.8 MB/s 00:07:27.059 14:11:54 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:27.059 14:11:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:27.059 14:11:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:27.059 14:11:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:27.059 14:11:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:27.059 14:11:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:27.059 14:11:54 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:27.059 14:11:54 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:07:27.059 14:11:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:27.059 14:11:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:27.059 14:11:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:27.059 14:11:54 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:27.059 14:11:54 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:27.059 14:11:54 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:27.059 14:11:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:27.059 14:11:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:27.059 14:11:54 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:27.059 14:11:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:27.059 14:11:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:27.323 14:11:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:27.323 14:11:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:27.323 14:11:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:27.323 14:11:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:27.323 14:11:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:27.323 14:11:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:27.323 14:11:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:27.323 14:11:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:27.323 14:11:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:27.323 14:11:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:27.581 14:11:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:27.582 14:11:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:27.582 14:11:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:27.582 14:11:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:27.582 14:11:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:27.582 14:11:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:27.582 14:11:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:27.582 14:11:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:27.582 14:11:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:27.582 14:11:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:27.582 14:11:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:27.840 14:11:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:27.840 14:11:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:27.840 14:11:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:07:27.840 14:11:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:27.840 14:11:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:27.840 14:11:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:27.840 14:11:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:27.840 14:11:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:27.840 14:11:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:27.840 14:11:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:27.840 14:11:55 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:27.840 14:11:55 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:27.840 14:11:55 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:28.409 14:11:55 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:29.344 [2024-11-06 14:11:56.918274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:29.600 [2024-11-06 14:11:57.036584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.600 [2024-11-06 14:11:57.036604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.858 [2024-11-06 14:11:57.238183] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:29.858 [2024-11-06 14:11:57.238325] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:29.858 [2024-11-06 14:11:57.238342] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:31.233 spdk_app_start Round 2 00:07:31.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:31.233 14:11:58 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:31.233 14:11:58 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:31.233 14:11:58 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59376 /var/tmp/spdk-nbd.sock 00:07:31.233 14:11:58 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 59376 ']' 00:07:31.233 14:11:58 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:31.233 14:11:58 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:31.233 14:11:58 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
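Every round then ends the same way: confirm no NBD exports remain, ask the target to exit, and pause before the next "spdk_app_start Round N" message. A rough sketch of that teardown, reusing the hypothetical $rpc shorthand from the previous snippet:

  # Expect an empty export list once both devices have been stopped.
  count=$($rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
  [ "$count" -eq 0 ]
  # Ask the app_repeat instance to terminate and give it a moment before the next round.
  $rpc spdk_kill_instance SIGTERM
  sleep 3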
00:07:31.233 14:11:58 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:31.233 14:11:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:31.491 14:11:59 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:31.491 14:11:59 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:07:31.491 14:11:59 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:31.750 Malloc0 00:07:31.750 14:11:59 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:32.008 Malloc1 00:07:32.008 14:11:59 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:32.008 14:11:59 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:32.009 14:11:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:32.009 14:11:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:32.009 14:11:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:32.009 14:11:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:32.009 14:11:59 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:32.009 14:11:59 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:32.009 14:11:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:32.009 14:11:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:32.009 14:11:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:32.009 14:11:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:32.009 14:11:59 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:32.009 14:11:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:32.009 14:11:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:32.009 14:11:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:32.266 /dev/nbd0 00:07:32.266 14:11:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:32.266 14:11:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:32.266 14:11:59 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:07:32.266 14:11:59 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:07:32.266 14:11:59 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:32.266 14:11:59 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:32.266 14:11:59 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:07:32.266 14:11:59 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:07:32.266 14:11:59 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:32.266 14:11:59 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:32.266 14:11:59 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:32.266 1+0 records in 00:07:32.266 1+0 records out 
00:07:32.266 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000294663 s, 13.9 MB/s 00:07:32.266 14:11:59 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:32.266 14:11:59 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:07:32.266 14:11:59 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:32.266 14:11:59 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:32.266 14:11:59 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:07:32.266 14:11:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:32.266 14:11:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:32.266 14:11:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:32.525 /dev/nbd1 00:07:32.525 14:12:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:32.525 14:12:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:32.525 14:12:00 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:07:32.525 14:12:00 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:07:32.525 14:12:00 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:32.525 14:12:00 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:32.525 14:12:00 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:07:32.525 14:12:00 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:07:32.525 14:12:00 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:32.525 14:12:00 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:32.525 14:12:00 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:32.525 1+0 records in 00:07:32.525 1+0 records out 00:07:32.525 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000291947 s, 14.0 MB/s 00:07:32.525 14:12:00 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:32.525 14:12:00 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:07:32.525 14:12:00 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:32.525 14:12:00 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:32.525 14:12:00 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:07:32.525 14:12:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:32.525 14:12:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:32.525 14:12:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:32.525 14:12:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:32.525 14:12:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:32.784 14:12:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:32.784 { 00:07:32.784 "nbd_device": "/dev/nbd0", 00:07:32.784 "bdev_name": "Malloc0" 00:07:32.784 }, 00:07:32.784 { 00:07:32.784 "nbd_device": "/dev/nbd1", 00:07:32.784 "bdev_name": "Malloc1" 00:07:32.784 } 
00:07:32.784 ]' 00:07:32.784 14:12:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:32.784 { 00:07:32.784 "nbd_device": "/dev/nbd0", 00:07:32.784 "bdev_name": "Malloc0" 00:07:32.784 }, 00:07:32.784 { 00:07:32.784 "nbd_device": "/dev/nbd1", 00:07:32.784 "bdev_name": "Malloc1" 00:07:32.784 } 00:07:32.784 ]' 00:07:32.784 14:12:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:32.784 14:12:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:32.784 /dev/nbd1' 00:07:32.784 14:12:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:32.784 /dev/nbd1' 00:07:32.784 14:12:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:32.784 14:12:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:32.784 14:12:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:32.784 14:12:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:32.784 14:12:00 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:32.784 14:12:00 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:32.784 14:12:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:32.784 14:12:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:32.784 14:12:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:32.784 14:12:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:32.784 14:12:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:32.784 14:12:00 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:32.785 256+0 records in 00:07:32.785 256+0 records out 00:07:32.785 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00595054 s, 176 MB/s 00:07:32.785 14:12:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:32.785 14:12:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:33.044 256+0 records in 00:07:33.044 256+0 records out 00:07:33.044 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0368561 s, 28.5 MB/s 00:07:33.044 14:12:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:33.044 14:12:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:33.044 256+0 records in 00:07:33.044 256+0 records out 00:07:33.044 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0369539 s, 28.4 MB/s 00:07:33.044 14:12:00 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:33.044 14:12:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:33.044 14:12:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:33.044 14:12:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:33.044 14:12:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:33.044 14:12:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:33.044 14:12:00 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:33.044 14:12:00 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:33.044 14:12:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:33.044 14:12:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:33.044 14:12:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:33.044 14:12:00 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:33.044 14:12:00 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:33.044 14:12:00 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:33.044 14:12:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:33.044 14:12:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:33.044 14:12:00 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:33.044 14:12:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:33.044 14:12:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:33.303 14:12:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:33.303 14:12:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:33.303 14:12:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:33.303 14:12:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:33.303 14:12:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:33.303 14:12:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:33.303 14:12:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:33.303 14:12:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:33.303 14:12:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:33.303 14:12:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:33.561 14:12:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:33.561 14:12:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:33.561 14:12:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:33.561 14:12:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:33.561 14:12:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:33.561 14:12:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:33.561 14:12:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:33.561 14:12:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:33.561 14:12:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:33.561 14:12:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:33.561 14:12:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:33.561 14:12:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:33.561 14:12:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:33.561 14:12:01 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:33.819 14:12:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:33.819 14:12:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:33.819 14:12:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:33.819 14:12:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:33.819 14:12:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:33.819 14:12:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:33.819 14:12:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:33.819 14:12:01 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:33.819 14:12:01 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:33.819 14:12:01 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:34.077 14:12:01 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:35.456 [2024-11-06 14:12:02.816227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:35.456 [2024-11-06 14:12:02.931611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.456 [2024-11-06 14:12:02.931614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.714 [2024-11-06 14:12:03.132071] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:35.714 [2024-11-06 14:12:03.132195] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:35.714 [2024-11-06 14:12:03.132219] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:37.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:37.091 14:12:04 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59376 /var/tmp/spdk-nbd.sock 00:07:37.091 14:12:04 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 59376 ']' 00:07:37.091 14:12:04 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:37.091 14:12:04 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:37.091 14:12:04 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
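Both nbd_stop_disk calls in the teardown above are followed by the same waitfornbd_exit polling of /proc/partitions (nbd_common.sh@35-45). A sketch reconstructed from those logged commands, with the back-off interval assumed because that branch is never reached here:

    # Reconstruction of the polling helper stepped through above; the real
    # nbd_common.sh may differ in details.
    waitfornbd_exit() {
        local nbd_name=$1

        for ((i = 1; i <= 20; i++)); do
            # While the device still shows up in /proc/partitions, keep waiting.
            if grep -q -w "$nbd_name" /proc/partitions; then
                sleep 0.1  # assumed back-off; not visible in this trace
            else
                break
            fi
        done

        return 0
    }

In the log the grep fails on the first pass for both nbd0 and nbd1, so the loop breaks immediately and the function returns 0.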
00:07:37.091 14:12:04 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:37.091 14:12:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:37.351 14:12:04 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:37.351 14:12:04 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:07:37.351 14:12:04 event.app_repeat -- event/event.sh@39 -- # killprocess 59376 00:07:37.351 14:12:04 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 59376 ']' 00:07:37.351 14:12:04 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 59376 00:07:37.351 14:12:04 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:07:37.351 14:12:04 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:37.351 14:12:04 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59376 00:07:37.351 killing process with pid 59376 00:07:37.351 14:12:04 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:37.351 14:12:04 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:37.351 14:12:04 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59376' 00:07:37.351 14:12:04 event.app_repeat -- common/autotest_common.sh@971 -- # kill 59376 00:07:37.351 14:12:04 event.app_repeat -- common/autotest_common.sh@976 -- # wait 59376 00:07:38.729 spdk_app_start is called in Round 0. 00:07:38.729 Shutdown signal received, stop current app iteration 00:07:38.729 Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 reinitialization... 00:07:38.729 spdk_app_start is called in Round 1. 00:07:38.729 Shutdown signal received, stop current app iteration 00:07:38.729 Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 reinitialization... 00:07:38.729 spdk_app_start is called in Round 2. 00:07:38.729 Shutdown signal received, stop current app iteration 00:07:38.729 Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 reinitialization... 00:07:38.729 spdk_app_start is called in Round 3. 00:07:38.729 Shutdown signal received, stop current app iteration 00:07:38.729 14:12:05 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:38.729 14:12:05 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:38.729 00:07:38.729 real 0m20.052s 00:07:38.729 user 0m42.829s 00:07:38.729 sys 0m3.383s 00:07:38.729 14:12:05 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:38.729 14:12:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:38.729 ************************************ 00:07:38.729 END TEST app_repeat 00:07:38.729 ************************************ 00:07:38.729 14:12:06 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:38.729 14:12:06 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:38.729 14:12:06 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:38.729 14:12:06 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:38.729 14:12:06 event -- common/autotest_common.sh@10 -- # set +x 00:07:38.729 ************************************ 00:07:38.729 START TEST cpu_locks 00:07:38.729 ************************************ 00:07:38.729 14:12:06 event.cpu_locks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:38.729 * Looking for test storage... 
00:07:38.729 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:38.729 14:12:06 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:38.729 14:12:06 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:07:38.729 14:12:06 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:38.729 14:12:06 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:38.729 14:12:06 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:38.729 14:12:06 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:38.729 14:12:06 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:38.729 14:12:06 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:38.729 14:12:06 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:38.729 14:12:06 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:38.729 14:12:06 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:38.729 14:12:06 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:38.729 14:12:06 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:38.729 14:12:06 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:38.729 14:12:06 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:38.729 14:12:06 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:38.729 14:12:06 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:38.729 14:12:06 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:38.729 14:12:06 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:38.729 14:12:06 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:38.729 14:12:06 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:38.729 14:12:06 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:38.729 14:12:06 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:38.729 14:12:06 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:38.729 14:12:06 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:38.729 14:12:06 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:38.729 14:12:06 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:38.729 14:12:06 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:38.729 14:12:06 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:38.729 14:12:06 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:38.729 14:12:06 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:38.729 14:12:06 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:38.729 14:12:06 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:38.729 14:12:06 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:38.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.729 --rc genhtml_branch_coverage=1 00:07:38.729 --rc genhtml_function_coverage=1 00:07:38.729 --rc genhtml_legend=1 00:07:38.729 --rc geninfo_all_blocks=1 00:07:38.729 --rc geninfo_unexecuted_blocks=1 00:07:38.729 00:07:38.729 ' 00:07:38.729 14:12:06 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:38.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.729 --rc genhtml_branch_coverage=1 00:07:38.729 --rc genhtml_function_coverage=1 
00:07:38.729 --rc genhtml_legend=1 00:07:38.729 --rc geninfo_all_blocks=1 00:07:38.729 --rc geninfo_unexecuted_blocks=1 00:07:38.729 00:07:38.729 ' 00:07:38.729 14:12:06 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:38.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.729 --rc genhtml_branch_coverage=1 00:07:38.729 --rc genhtml_function_coverage=1 00:07:38.729 --rc genhtml_legend=1 00:07:38.729 --rc geninfo_all_blocks=1 00:07:38.729 --rc geninfo_unexecuted_blocks=1 00:07:38.729 00:07:38.729 ' 00:07:38.729 14:12:06 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:38.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.729 --rc genhtml_branch_coverage=1 00:07:38.729 --rc genhtml_function_coverage=1 00:07:38.729 --rc genhtml_legend=1 00:07:38.729 --rc geninfo_all_blocks=1 00:07:38.729 --rc geninfo_unexecuted_blocks=1 00:07:38.729 00:07:38.729 ' 00:07:38.729 14:12:06 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:38.729 14:12:06 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:38.729 14:12:06 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:38.729 14:12:06 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:38.729 14:12:06 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:38.729 14:12:06 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:38.729 14:12:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:38.729 ************************************ 00:07:38.729 START TEST default_locks 00:07:38.729 ************************************ 00:07:38.729 14:12:06 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:07:38.729 14:12:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59833 00:07:38.729 14:12:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:38.729 14:12:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59833 00:07:38.729 14:12:06 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 59833 ']' 00:07:38.729 14:12:06 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.729 14:12:06 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:38.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.729 14:12:06 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.729 14:12:06 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:38.729 14:12:06 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:38.989 [2024-11-06 14:12:06.421078] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:07:38.989 [2024-11-06 14:12:06.421226] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59833 ] 00:07:38.989 [2024-11-06 14:12:06.600402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.247 [2024-11-06 14:12:06.719595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.506 [2024-11-06 14:12:06.980517] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:40.094 14:12:07 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:40.094 14:12:07 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:07:40.094 14:12:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59833 00:07:40.094 14:12:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59833 00:07:40.094 14:12:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:40.661 14:12:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59833 00:07:40.661 14:12:07 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 59833 ']' 00:07:40.661 14:12:07 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 59833 00:07:40.661 14:12:07 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:07:40.661 14:12:08 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:40.661 14:12:08 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59833 00:07:40.661 14:12:08 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:40.661 14:12:08 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:40.661 killing process with pid 59833 00:07:40.661 14:12:08 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59833' 00:07:40.661 14:12:08 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 59833 00:07:40.661 14:12:08 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 59833 00:07:43.205 14:12:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59833 00:07:43.205 14:12:10 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:07:43.205 14:12:10 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59833 00:07:43.205 14:12:10 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:43.205 14:12:10 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:43.205 14:12:10 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:43.205 14:12:10 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:43.205 14:12:10 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 59833 00:07:43.205 14:12:10 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 59833 ']' 00:07:43.205 14:12:10 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.205 
14:12:10 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:43.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.205 14:12:10 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.205 14:12:10 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:43.205 14:12:10 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:43.205 ERROR: process (pid: 59833) is no longer running 00:07:43.205 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59833) - No such process 00:07:43.205 14:12:10 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:43.205 14:12:10 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:07:43.205 14:12:10 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:07:43.205 14:12:10 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:43.205 14:12:10 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:43.205 14:12:10 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:43.205 14:12:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:43.205 14:12:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:43.205 14:12:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:43.205 14:12:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:43.205 00:07:43.205 real 0m4.200s 00:07:43.205 user 0m4.110s 00:07:43.205 sys 0m0.749s 00:07:43.205 14:12:10 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:43.205 ************************************ 00:07:43.205 END TEST default_locks 00:07:43.205 ************************************ 00:07:43.205 14:12:10 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:43.205 14:12:10 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:43.205 14:12:10 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:43.205 14:12:10 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:43.205 14:12:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:43.205 ************************************ 00:07:43.205 START TEST default_locks_via_rpc 00:07:43.205 ************************************ 00:07:43.205 14:12:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:07:43.205 14:12:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59909 00:07:43.205 14:12:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:43.205 14:12:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59909 00:07:43.205 14:12:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59909 ']' 00:07:43.205 14:12:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.205 14:12:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local 
max_retries=100 00:07:43.205 14:12:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.205 14:12:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:43.205 14:12:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.205 [2024-11-06 14:12:10.686573] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:07:43.205 [2024-11-06 14:12:10.686724] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59909 ] 00:07:43.464 [2024-11-06 14:12:10.875983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.464 [2024-11-06 14:12:11.001885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.722 [2024-11-06 14:12:11.260497] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:44.291 14:12:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:44.291 14:12:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:44.291 14:12:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:44.291 14:12:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.291 14:12:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:44.291 14:12:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.291 14:12:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:44.291 14:12:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:44.291 14:12:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:44.291 14:12:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:44.291 14:12:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:44.291 14:12:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.291 14:12:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:44.291 14:12:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.291 14:12:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59909 00:07:44.291 14:12:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59909 00:07:44.291 14:12:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:44.858 14:12:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59909 00:07:44.858 14:12:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 59909 ']' 00:07:44.858 14:12:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 59909 00:07:44.858 14:12:12 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:07:44.858 14:12:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:44.858 14:12:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59909 00:07:44.858 14:12:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:44.858 14:12:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:44.858 killing process with pid 59909 00:07:44.858 14:12:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59909' 00:07:44.858 14:12:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 59909 00:07:44.858 14:12:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 59909 00:07:47.388 00:07:47.388 real 0m4.271s 00:07:47.389 user 0m4.238s 00:07:47.389 sys 0m0.752s 00:07:47.389 14:12:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:47.389 ************************************ 00:07:47.389 END TEST default_locks_via_rpc 00:07:47.389 14:12:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:47.389 ************************************ 00:07:47.389 14:12:14 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:47.389 14:12:14 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:47.389 14:12:14 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:47.389 14:12:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:47.389 ************************************ 00:07:47.389 START TEST non_locking_app_on_locked_coremask 00:07:47.389 ************************************ 00:07:47.389 14:12:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:07:47.389 14:12:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59983 00:07:47.389 14:12:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:47.389 14:12:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59983 /var/tmp/spdk.sock 00:07:47.389 14:12:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59983 ']' 00:07:47.389 14:12:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.389 14:12:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:47.389 14:12:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
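Each cpu_locks sub-test above keys on the same assertion: locks_exist pipes lslocks for the target pid into a grep for the spdk_cpu_lock file (cpu_locks.sh@22). A minimal sketch assumed from those two logged commands:

    # Sketch of locks_exist; only the lslocks | grep pipeline is taken from
    # the trace, the wrapper around it is assumed.
    locks_exist() {
        local pid=$1
        # spdk_tgt claims a per-core spdk_cpu_lock file; lslocks lists the
        # locks the pid holds so grep can confirm the claim.
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }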
00:07:47.389 14:12:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:47.389 14:12:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:47.647 [2024-11-06 14:12:15.027788] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:07:47.647 [2024-11-06 14:12:15.027943] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59983 ] 00:07:47.647 [2024-11-06 14:12:15.207173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.906 [2024-11-06 14:12:15.330629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.165 [2024-11-06 14:12:15.597759] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:48.759 14:12:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:48.759 14:12:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:48.759 14:12:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59999 00:07:48.759 14:12:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59999 /var/tmp/spdk2.sock 00:07:48.759 14:12:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:48.759 14:12:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59999 ']' 00:07:48.759 14:12:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:48.759 14:12:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:48.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:48.759 14:12:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:48.759 14:12:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:48.759 14:12:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:48.759 [2024-11-06 14:12:16.331992] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:07:48.760 [2024-11-06 14:12:16.332127] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59999 ] 00:07:49.019 [2024-11-06 14:12:16.520731] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:49.019 [2024-11-06 14:12:16.520794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.278 [2024-11-06 14:12:16.769409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.845 [2024-11-06 14:12:17.330762] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:51.770 14:12:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:51.770 14:12:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:51.770 14:12:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59983 00:07:51.770 14:12:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59983 00:07:51.770 14:12:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:52.336 14:12:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59983 00:07:52.336 14:12:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59983 ']' 00:07:52.336 14:12:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 59983 00:07:52.336 14:12:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:07:52.336 14:12:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:52.336 14:12:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59983 00:07:52.604 14:12:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:52.605 14:12:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:52.605 killing process with pid 59983 00:07:52.605 14:12:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59983' 00:07:52.605 14:12:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 59983 00:07:52.605 14:12:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 59983 00:07:57.911 14:12:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59999 00:07:57.911 14:12:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59999 ']' 00:07:57.911 14:12:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 59999 00:07:57.911 14:12:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:07:57.911 14:12:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:57.911 14:12:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59999 00:07:57.911 killing process with pid 59999 00:07:57.911 14:12:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:57.911 14:12:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:57.911 14:12:25 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59999' 00:07:57.911 14:12:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 59999 00:07:57.911 14:12:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 59999 00:08:00.441 00:08:00.441 real 0m12.865s 00:08:00.441 user 0m13.261s 00:08:00.441 sys 0m1.565s 00:08:00.441 14:12:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:00.441 14:12:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:00.441 ************************************ 00:08:00.441 END TEST non_locking_app_on_locked_coremask 00:08:00.441 ************************************ 00:08:00.441 14:12:27 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:08:00.441 14:12:27 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:00.441 14:12:27 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:00.441 14:12:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:00.441 ************************************ 00:08:00.441 START TEST locking_app_on_unlocked_coremask 00:08:00.441 ************************************ 00:08:00.441 14:12:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:08:00.441 14:12:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60158 00:08:00.441 14:12:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:08:00.441 14:12:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60158 /var/tmp/spdk.sock 00:08:00.441 14:12:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 60158 ']' 00:08:00.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.441 14:12:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.441 14:12:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:00.441 14:12:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.441 14:12:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:00.441 14:12:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:00.441 [2024-11-06 14:12:27.971107] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:08:00.441 [2024-11-06 14:12:27.971525] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60158 ] 00:08:00.740 [2024-11-06 14:12:28.155409] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:00.740 [2024-11-06 14:12:28.155486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.740 [2024-11-06 14:12:28.280173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.008 [2024-11-06 14:12:28.550171] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:01.575 14:12:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:01.575 14:12:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:08:01.575 14:12:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:01.575 14:12:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60185 00:08:01.575 14:12:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60185 /var/tmp/spdk2.sock 00:08:01.575 14:12:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 60185 ']' 00:08:01.575 14:12:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:01.575 14:12:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:01.575 14:12:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:01.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:01.575 14:12:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:01.575 14:12:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:01.833 [2024-11-06 14:12:29.296593] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:08:01.833 [2024-11-06 14:12:29.296746] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60185 ] 00:08:02.092 [2024-11-06 14:12:29.486935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.350 [2024-11-06 14:12:29.741237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.918 [2024-11-06 14:12:30.291417] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:04.820 14:12:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:04.820 14:12:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:08:04.820 14:12:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60185 00:08:04.820 14:12:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60185 00:08:04.820 14:12:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:05.387 14:12:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60158 00:08:05.387 14:12:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 60158 ']' 00:08:05.387 14:12:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 60158 00:08:05.387 14:12:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:08:05.387 14:12:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:05.387 14:12:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60158 00:08:05.387 killing process with pid 60158 00:08:05.387 14:12:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:05.387 14:12:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:05.387 14:12:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60158' 00:08:05.387 14:12:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 60158 00:08:05.387 14:12:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 60158 00:08:10.691 14:12:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60185 00:08:10.691 14:12:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 60185 ']' 00:08:10.691 14:12:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 60185 00:08:10.691 14:12:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:08:10.691 14:12:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:10.691 14:12:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60185 00:08:10.691 killing process with pid 60185 00:08:10.691 14:12:37 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:10.691 14:12:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:10.691 14:12:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60185' 00:08:10.691 14:12:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 60185 00:08:10.691 14:12:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 60185 00:08:12.595 00:08:12.595 real 0m12.242s 00:08:12.595 user 0m12.623s 00:08:12.595 sys 0m1.542s 00:08:12.595 14:12:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:12.595 ************************************ 00:08:12.595 END TEST locking_app_on_unlocked_coremask 00:08:12.595 ************************************ 00:08:12.595 14:12:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:12.595 14:12:40 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:08:12.595 14:12:40 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:12.595 14:12:40 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:12.595 14:12:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:12.595 ************************************ 00:08:12.595 START TEST locking_app_on_locked_coremask 00:08:12.595 ************************************ 00:08:12.595 14:12:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:08:12.595 14:12:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60333 00:08:12.595 14:12:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:12.595 14:12:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60333 /var/tmp/spdk.sock 00:08:12.595 14:12:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 60333 ']' 00:08:12.595 14:12:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.595 14:12:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:12.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.595 14:12:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.595 14:12:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:12.595 14:12:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:12.853 [2024-11-06 14:12:40.287989] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:08:12.853 [2024-11-06 14:12:40.288344] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60333 ] 00:08:12.853 [2024-11-06 14:12:40.472937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.112 [2024-11-06 14:12:40.593590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.371 [2024-11-06 14:12:40.860754] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:13.946 14:12:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:13.946 14:12:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:08:13.946 14:12:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60355 00:08:13.946 14:12:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60355 /var/tmp/spdk2.sock 00:08:13.946 14:12:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:13.946 14:12:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:08:13.946 14:12:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60355 /var/tmp/spdk2.sock 00:08:13.946 14:12:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:13.946 14:12:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:13.946 14:12:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:13.946 14:12:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:13.946 14:12:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 60355 /var/tmp/spdk2.sock 00:08:13.946 14:12:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 60355 ']' 00:08:13.946 14:12:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:13.946 14:12:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:13.946 14:12:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:13.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:13.946 14:12:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:13.946 14:12:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:14.246 [2024-11-06 14:12:41.630549] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
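The second spdk_tgt launched above reuses core mask 0x1 while pid 60333 still holds core 0; only the RPC socket differs, and the harness wraps the launch in NOT ... so a non-zero exit is the expected outcome. Outside the harness the same collision can be reproduced with two commands (binary path and flags copied from the trace; the sleep is a crude stand-in for waitforlisten):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &            # first instance claims core 0
    sleep 2                                                              # give it time to create its lock file
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
    # expected: "Cannot create lock on core 0, probably process <pid> has claimed it"
    # followed by "Unable to acquire lock on assigned core mask - exiting."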
00:08:14.246 [2024-11-06 14:12:41.631356] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60355 ] 00:08:14.246 [2024-11-06 14:12:41.839081] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60333 has claimed it. 00:08:14.246 [2024-11-06 14:12:41.839163] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:14.814 ERROR: process (pid: 60355) is no longer running 00:08:14.814 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (60355) - No such process 00:08:14.814 14:12:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:14.814 14:12:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:08:14.814 14:12:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:08:14.814 14:12:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:14.814 14:12:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:14.814 14:12:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:14.814 14:12:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60333 00:08:14.814 14:12:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60333 00:08:14.814 14:12:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:15.381 14:12:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60333 00:08:15.381 14:12:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 60333 ']' 00:08:15.381 14:12:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 60333 00:08:15.381 14:12:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:08:15.381 14:12:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:15.381 14:12:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60333 00:08:15.381 killing process with pid 60333 00:08:15.381 14:12:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:15.381 14:12:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:15.381 14:12:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60333' 00:08:15.381 14:12:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 60333 00:08:15.381 14:12:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 60333 00:08:17.912 00:08:17.912 real 0m5.122s 00:08:17.912 user 0m5.337s 00:08:17.912 sys 0m0.993s 00:08:17.912 14:12:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:17.912 14:12:45 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:08:17.912 ************************************ 00:08:17.912 END TEST locking_app_on_locked_coremask 00:08:17.912 ************************************ 00:08:17.912 14:12:45 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:08:17.912 14:12:45 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:17.912 14:12:45 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:17.912 14:12:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:17.912 ************************************ 00:08:17.912 START TEST locking_overlapped_coremask 00:08:17.912 ************************************ 00:08:17.912 14:12:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:08:17.912 14:12:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60424 00:08:17.912 14:12:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60424 /var/tmp/spdk.sock 00:08:17.912 14:12:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:08:17.912 14:12:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 60424 ']' 00:08:17.912 14:12:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.912 14:12:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:17.912 14:12:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.912 14:12:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:17.912 14:12:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:17.912 [2024-11-06 14:12:45.491622] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:08:17.912 [2024-11-06 14:12:45.491778] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60424 ] 00:08:18.171 [2024-11-06 14:12:45.682855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:18.430 [2024-11-06 14:12:45.814325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:18.430 [2024-11-06 14:12:45.814416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.430 [2024-11-06 14:12:45.814450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:18.689 [2024-11-06 14:12:46.092240] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:19.257 14:12:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:19.257 14:12:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:08:19.257 14:12:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60448 00:08:19.257 14:12:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60448 /var/tmp/spdk2.sock 00:08:19.257 14:12:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:08:19.257 14:12:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:08:19.257 14:12:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60448 /var/tmp/spdk2.sock 00:08:19.257 14:12:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:19.257 14:12:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.257 14:12:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:19.257 14:12:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.257 14:12:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 60448 /var/tmp/spdk2.sock 00:08:19.257 14:12:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 60448 ']' 00:08:19.257 14:12:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:19.257 14:12:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:19.257 14:12:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:19.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:19.257 14:12:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:19.257 14:12:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:19.516 [2024-11-06 14:12:46.915502] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
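This round starts the first target with -m 0x7 (cores 0-2) and the second with -m 0x1c (cores 2-4), so the two masks intersect in exactly one core; the overlap can be checked by hand:

    printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. core 2
    # which matches the claim error reported next:
    # "Cannot create lock on core 2, probably process 60424 has claimed it"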
00:08:19.516 [2024-11-06 14:12:46.915648] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60448 ] 00:08:19.516 [2024-11-06 14:12:47.109263] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60424 has claimed it. 00:08:19.516 [2024-11-06 14:12:47.109360] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:20.084 ERROR: process (pid: 60448) is no longer running 00:08:20.084 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (60448) - No such process 00:08:20.084 14:12:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:20.084 14:12:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:08:20.084 14:12:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:08:20.084 14:12:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:20.084 14:12:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:20.084 14:12:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:20.084 14:12:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:20.084 14:12:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:20.084 14:12:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:20.084 14:12:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:20.084 14:12:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60424 00:08:20.084 14:12:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 60424 ']' 00:08:20.084 14:12:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 60424 00:08:20.084 14:12:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:08:20.084 14:12:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:20.084 14:12:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60424 00:08:20.084 14:12:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:20.084 14:12:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:20.084 14:12:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60424' 00:08:20.084 killing process with pid 60424 00:08:20.084 14:12:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 60424 00:08:20.084 14:12:47 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 60424 00:08:22.657 00:08:22.657 real 0m4.718s 00:08:22.657 user 0m12.675s 00:08:22.657 sys 0m0.759s 00:08:22.657 ************************************ 00:08:22.657 END TEST locking_overlapped_coremask 00:08:22.657 ************************************ 00:08:22.657 14:12:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:22.657 14:12:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:22.657 14:12:50 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:22.657 14:12:50 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:22.657 14:12:50 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:22.657 14:12:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:22.657 ************************************ 00:08:22.657 START TEST locking_overlapped_coremask_via_rpc 00:08:22.657 ************************************ 00:08:22.657 14:12:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:08:22.658 14:12:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60512 00:08:22.658 14:12:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:22.658 14:12:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60512 /var/tmp/spdk.sock 00:08:22.658 14:12:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 60512 ']' 00:08:22.658 14:12:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.658 14:12:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:22.658 14:12:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.658 14:12:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:22.658 14:12:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.658 [2024-11-06 14:12:50.276167] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:08:22.658 [2024-11-06 14:12:50.276322] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60512 ] 00:08:22.916 [2024-11-06 14:12:50.465553] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:22.916 [2024-11-06 14:12:50.465610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:23.173 [2024-11-06 14:12:50.597910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.173 [2024-11-06 14:12:50.598027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.173 [2024-11-06 14:12:50.598058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:23.433 [2024-11-06 14:12:50.870091] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:24.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:24.050 14:12:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:24.050 14:12:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:24.050 14:12:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60534 00:08:24.050 14:12:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60534 /var/tmp/spdk2.sock 00:08:24.050 14:12:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 60534 ']' 00:08:24.050 14:12:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:24.050 14:12:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:24.050 14:12:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:24.050 14:12:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:24.050 14:12:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.050 14:12:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:24.309 [2024-11-06 14:12:51.632710] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:08:24.309 [2024-11-06 14:12:51.632872] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60534 ] 00:08:24.309 [2024-11-06 14:12:51.824790] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:24.309 [2024-11-06 14:12:51.824877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:24.565 [2024-11-06 14:12:52.085898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:24.565 [2024-11-06 14:12:52.088921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:24.565 [2024-11-06 14:12:52.088928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:25.129 [2024-11-06 14:12:52.635114] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:27.029 14:12:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:27.029 14:12:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:27.029 14:12:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:27.029 14:12:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.029 14:12:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.029 14:12:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.029 14:12:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:27.029 14:12:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:08:27.029 14:12:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:27.029 14:12:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:27.029 14:12:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:27.029 14:12:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:27.029 14:12:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:27.029 14:12:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:27.029 14:12:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.029 14:12:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.029 [2024-11-06 14:12:54.223078] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60512 has claimed it. 00:08:27.029 request: 00:08:27.029 { 00:08:27.029 "method": "framework_enable_cpumask_locks", 00:08:27.029 "req_id": 1 00:08:27.029 } 00:08:27.029 Got JSON-RPC error response 00:08:27.029 response: 00:08:27.029 { 00:08:27.029 "code": -32603, 00:08:27.029 "message": "Failed to claim CPU core: 2" 00:08:27.029 } 00:08:27.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
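In the via_rpc variant both targets boot with --disable-cpumask-locks, so the overlapping masks are tolerated at startup and the core claim only happens when framework_enable_cpumask_locks is issued over JSON-RPC. The exchange above boils down to (rpc.py path, socket and method name as in the trace):

    scripts/rpc.py framework_enable_cpumask_locks                         # first target on /var/tmp/spdk.sock: succeeds
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # second target: fails
    # JSON-RPC error -32603 "Failed to claim CPU core: 2", because the first
    # target already holds the lock file for the shared core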
00:08:27.029 14:12:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:27.029 14:12:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:08:27.029 14:12:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:27.029 14:12:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:27.029 14:12:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:27.029 14:12:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60512 /var/tmp/spdk.sock 00:08:27.029 14:12:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 60512 ']' 00:08:27.029 14:12:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.029 14:12:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:27.029 14:12:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.029 14:12:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:27.029 14:12:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:27.029 14:12:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:27.029 14:12:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:27.029 14:12:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60534 /var/tmp/spdk2.sock 00:08:27.029 14:12:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 60534 ']' 00:08:27.029 14:12:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:27.029 14:12:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:27.029 14:12:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:08:27.029 14:12:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:27.029 14:12:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.287 ************************************ 00:08:27.287 END TEST locking_overlapped_coremask_via_rpc 00:08:27.287 ************************************ 00:08:27.287 14:12:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:27.287 14:12:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:27.287 14:12:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:27.287 14:12:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:27.287 14:12:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:27.287 14:12:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:27.287 00:08:27.287 real 0m4.522s 00:08:27.287 user 0m1.287s 00:08:27.287 sys 0m0.270s 00:08:27.287 14:12:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:27.287 14:12:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.287 14:12:54 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:27.287 14:12:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60512 ]] 00:08:27.287 14:12:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60512 00:08:27.288 14:12:54 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 60512 ']' 00:08:27.288 14:12:54 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 60512 00:08:27.288 14:12:54 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:08:27.288 14:12:54 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:27.288 14:12:54 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60512 00:08:27.288 killing process with pid 60512 00:08:27.288 14:12:54 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:27.288 14:12:54 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:27.288 14:12:54 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60512' 00:08:27.288 14:12:54 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 60512 00:08:27.288 14:12:54 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 60512 00:08:29.815 14:12:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60534 ]] 00:08:29.815 14:12:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60534 00:08:29.815 14:12:57 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 60534 ']' 00:08:29.815 14:12:57 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 60534 00:08:29.815 14:12:57 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:08:29.815 14:12:57 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:29.815 
14:12:57 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60534 00:08:29.815 killing process with pid 60534 00:08:29.815 14:12:57 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:08:29.815 14:12:57 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:08:29.815 14:12:57 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60534' 00:08:29.815 14:12:57 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 60534 00:08:29.815 14:12:57 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 60534 00:08:32.376 14:12:59 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:32.376 14:12:59 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:32.376 14:12:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60512 ]] 00:08:32.376 14:12:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60512 00:08:32.376 14:12:59 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 60512 ']' 00:08:32.376 Process with pid 60512 is not found 00:08:32.376 Process with pid 60534 is not found 00:08:32.376 14:12:59 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 60512 00:08:32.376 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (60512) - No such process 00:08:32.376 14:12:59 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 60512 is not found' 00:08:32.376 14:12:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60534 ]] 00:08:32.376 14:12:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60534 00:08:32.376 14:12:59 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 60534 ']' 00:08:32.376 14:12:59 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 60534 00:08:32.376 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (60534) - No such process 00:08:32.376 14:12:59 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 60534 is not found' 00:08:32.376 14:12:59 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:32.376 00:08:32.376 real 0m53.678s 00:08:32.376 user 1m29.967s 00:08:32.376 sys 0m8.028s 00:08:32.376 14:12:59 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:32.376 14:12:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:32.376 ************************************ 00:08:32.376 END TEST cpu_locks 00:08:32.376 ************************************ 00:08:32.376 ************************************ 00:08:32.376 END TEST event 00:08:32.376 ************************************ 00:08:32.376 00:08:32.376 real 1m26.152s 00:08:32.376 user 2m34.283s 00:08:32.376 sys 0m12.774s 00:08:32.376 14:12:59 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:32.376 14:12:59 event -- common/autotest_common.sh@10 -- # set +x 00:08:32.376 14:12:59 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:32.376 14:12:59 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:32.376 14:12:59 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:32.376 14:12:59 -- common/autotest_common.sh@10 -- # set +x 00:08:32.376 ************************************ 00:08:32.376 START TEST thread 00:08:32.376 ************************************ 00:08:32.376 14:12:59 thread -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:32.376 * Looking for test storage... 
00:08:32.376 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:08:32.376 14:13:00 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:32.376 14:13:00 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:08:32.376 14:13:00 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:32.634 14:13:00 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:32.634 14:13:00 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:32.634 14:13:00 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:32.634 14:13:00 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:32.634 14:13:00 thread -- scripts/common.sh@336 -- # IFS=.-: 00:08:32.634 14:13:00 thread -- scripts/common.sh@336 -- # read -ra ver1 00:08:32.634 14:13:00 thread -- scripts/common.sh@337 -- # IFS=.-: 00:08:32.634 14:13:00 thread -- scripts/common.sh@337 -- # read -ra ver2 00:08:32.634 14:13:00 thread -- scripts/common.sh@338 -- # local 'op=<' 00:08:32.634 14:13:00 thread -- scripts/common.sh@340 -- # ver1_l=2 00:08:32.634 14:13:00 thread -- scripts/common.sh@341 -- # ver2_l=1 00:08:32.634 14:13:00 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:32.634 14:13:00 thread -- scripts/common.sh@344 -- # case "$op" in 00:08:32.634 14:13:00 thread -- scripts/common.sh@345 -- # : 1 00:08:32.634 14:13:00 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:32.634 14:13:00 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:32.634 14:13:00 thread -- scripts/common.sh@365 -- # decimal 1 00:08:32.634 14:13:00 thread -- scripts/common.sh@353 -- # local d=1 00:08:32.634 14:13:00 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:32.634 14:13:00 thread -- scripts/common.sh@355 -- # echo 1 00:08:32.634 14:13:00 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:08:32.634 14:13:00 thread -- scripts/common.sh@366 -- # decimal 2 00:08:32.634 14:13:00 thread -- scripts/common.sh@353 -- # local d=2 00:08:32.634 14:13:00 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:32.634 14:13:00 thread -- scripts/common.sh@355 -- # echo 2 00:08:32.634 14:13:00 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:08:32.634 14:13:00 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:32.634 14:13:00 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:32.634 14:13:00 thread -- scripts/common.sh@368 -- # return 0 00:08:32.634 14:13:00 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:32.634 14:13:00 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:32.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.634 --rc genhtml_branch_coverage=1 00:08:32.634 --rc genhtml_function_coverage=1 00:08:32.634 --rc genhtml_legend=1 00:08:32.634 --rc geninfo_all_blocks=1 00:08:32.634 --rc geninfo_unexecuted_blocks=1 00:08:32.634 00:08:32.634 ' 00:08:32.634 14:13:00 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:32.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.634 --rc genhtml_branch_coverage=1 00:08:32.634 --rc genhtml_function_coverage=1 00:08:32.634 --rc genhtml_legend=1 00:08:32.634 --rc geninfo_all_blocks=1 00:08:32.634 --rc geninfo_unexecuted_blocks=1 00:08:32.634 00:08:32.634 ' 00:08:32.634 14:13:00 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:32.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:08:32.634 --rc genhtml_branch_coverage=1 00:08:32.634 --rc genhtml_function_coverage=1 00:08:32.634 --rc genhtml_legend=1 00:08:32.634 --rc geninfo_all_blocks=1 00:08:32.634 --rc geninfo_unexecuted_blocks=1 00:08:32.634 00:08:32.634 ' 00:08:32.634 14:13:00 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:32.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.634 --rc genhtml_branch_coverage=1 00:08:32.634 --rc genhtml_function_coverage=1 00:08:32.634 --rc genhtml_legend=1 00:08:32.634 --rc geninfo_all_blocks=1 00:08:32.634 --rc geninfo_unexecuted_blocks=1 00:08:32.634 00:08:32.634 ' 00:08:32.634 14:13:00 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:32.634 14:13:00 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:08:32.634 14:13:00 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:32.634 14:13:00 thread -- common/autotest_common.sh@10 -- # set +x 00:08:32.634 ************************************ 00:08:32.634 START TEST thread_poller_perf 00:08:32.634 ************************************ 00:08:32.634 14:13:00 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:32.635 [2024-11-06 14:13:00.171252] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:08:32.635 [2024-11-06 14:13:00.171552] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60736 ] 00:08:32.893 [2024-11-06 14:13:00.360853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.893 [2024-11-06 14:13:00.483038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.893 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:08:34.266 [2024-11-06T14:13:01.901Z] ====================================== 00:08:34.266 [2024-11-06T14:13:01.901Z] busy:2503066014 (cyc) 00:08:34.266 [2024-11-06T14:13:01.901Z] total_run_count: 387000 00:08:34.266 [2024-11-06T14:13:01.901Z] tsc_hz: 2490000000 (cyc) 00:08:34.266 [2024-11-06T14:13:01.901Z] ====================================== 00:08:34.266 [2024-11-06T14:13:01.901Z] poller_cost: 6467 (cyc), 2597 (nsec) 00:08:34.266 00:08:34.266 real 0m1.608s 00:08:34.266 user 0m1.378s 00:08:34.266 sys 0m0.120s 00:08:34.266 14:13:01 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:34.266 14:13:01 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:34.266 ************************************ 00:08:34.266 END TEST thread_poller_perf 00:08:34.266 ************************************ 00:08:34.266 14:13:01 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:34.266 14:13:01 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:08:34.266 14:13:01 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:34.266 14:13:01 thread -- common/autotest_common.sh@10 -- # set +x 00:08:34.266 ************************************ 00:08:34.266 START TEST thread_poller_perf 00:08:34.266 ************************************ 00:08:34.266 14:13:01 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:34.266 [2024-11-06 14:13:01.844287] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:08:34.266 [2024-11-06 14:13:01.844585] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60767 ] 00:08:34.524 [2024-11-06 14:13:02.031440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.524 Running 1000 pollers for 1 seconds with 0 microseconds period. 
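The poller_cost line in the block above is busy cycles divided by total_run_count, converted to nanoseconds with the reported TSC frequency; redoing the arithmetic for this 1-microsecond-period run (numbers from the table, integer truncation assumed):

    echo $(( 2503066014 / 387000 ))              # 6467 cyc per poller invocation
    echo $(( 6467 * 1000000000 / 2490000000 ))   # 2597 nsec at tsc_hz = 2490000000
    # the 0-microsecond-period run that follows drops this to 495 cyc / 198 nsec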
00:08:34.524 [2024-11-06 14:13:02.155731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.902 [2024-11-06T14:13:03.537Z] ====================================== 00:08:35.902 [2024-11-06T14:13:03.537Z] busy:2496073722 (cyc) 00:08:35.902 [2024-11-06T14:13:03.537Z] total_run_count: 5036000 00:08:35.902 [2024-11-06T14:13:03.537Z] tsc_hz: 2490000000 (cyc) 00:08:35.902 [2024-11-06T14:13:03.537Z] ====================================== 00:08:35.902 [2024-11-06T14:13:03.537Z] poller_cost: 495 (cyc), 198 (nsec) 00:08:35.902 00:08:35.902 real 0m1.607s 00:08:35.902 user 0m1.378s 00:08:35.902 sys 0m0.120s 00:08:35.902 14:13:03 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:35.902 ************************************ 00:08:35.902 END TEST thread_poller_perf 00:08:35.902 ************************************ 00:08:35.902 14:13:03 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:35.902 14:13:03 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:35.902 ************************************ 00:08:35.902 END TEST thread 00:08:35.902 ************************************ 00:08:35.902 00:08:35.902 real 0m3.589s 00:08:35.902 user 0m2.939s 00:08:35.902 sys 0m0.448s 00:08:35.902 14:13:03 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:35.902 14:13:03 thread -- common/autotest_common.sh@10 -- # set +x 00:08:35.902 14:13:03 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:08:35.902 14:13:03 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:35.902 14:13:03 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:35.902 14:13:03 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:35.902 14:13:03 -- common/autotest_common.sh@10 -- # set +x 00:08:35.902 ************************************ 00:08:35.902 START TEST app_cmdline 00:08:35.902 ************************************ 00:08:35.902 14:13:03 app_cmdline -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:36.161 * Looking for test storage... 
00:08:36.161 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:36.161 14:13:03 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:36.161 14:13:03 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:08:36.161 14:13:03 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:36.161 14:13:03 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:36.161 14:13:03 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:36.161 14:13:03 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:36.161 14:13:03 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:36.161 14:13:03 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:08:36.161 14:13:03 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:08:36.161 14:13:03 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:08:36.161 14:13:03 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:08:36.161 14:13:03 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:08:36.161 14:13:03 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:08:36.161 14:13:03 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:08:36.161 14:13:03 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:36.161 14:13:03 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:08:36.161 14:13:03 app_cmdline -- scripts/common.sh@345 -- # : 1 00:08:36.161 14:13:03 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:36.161 14:13:03 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:36.161 14:13:03 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:08:36.161 14:13:03 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:08:36.161 14:13:03 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:36.161 14:13:03 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:08:36.161 14:13:03 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:08:36.161 14:13:03 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:08:36.161 14:13:03 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:08:36.161 14:13:03 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:36.161 14:13:03 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:08:36.161 14:13:03 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:08:36.161 14:13:03 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:36.162 14:13:03 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:36.162 14:13:03 app_cmdline -- scripts/common.sh@368 -- # return 0 00:08:36.162 14:13:03 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:36.162 14:13:03 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:36.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.162 --rc genhtml_branch_coverage=1 00:08:36.162 --rc genhtml_function_coverage=1 00:08:36.162 --rc genhtml_legend=1 00:08:36.162 --rc geninfo_all_blocks=1 00:08:36.162 --rc geninfo_unexecuted_blocks=1 00:08:36.162 00:08:36.162 ' 00:08:36.162 14:13:03 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:36.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.162 --rc genhtml_branch_coverage=1 00:08:36.162 --rc genhtml_function_coverage=1 00:08:36.162 --rc genhtml_legend=1 00:08:36.162 --rc geninfo_all_blocks=1 00:08:36.162 --rc geninfo_unexecuted_blocks=1 00:08:36.162 
00:08:36.162 ' 00:08:36.162 14:13:03 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:36.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.162 --rc genhtml_branch_coverage=1 00:08:36.162 --rc genhtml_function_coverage=1 00:08:36.162 --rc genhtml_legend=1 00:08:36.162 --rc geninfo_all_blocks=1 00:08:36.162 --rc geninfo_unexecuted_blocks=1 00:08:36.162 00:08:36.162 ' 00:08:36.162 14:13:03 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:36.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.162 --rc genhtml_branch_coverage=1 00:08:36.162 --rc genhtml_function_coverage=1 00:08:36.162 --rc genhtml_legend=1 00:08:36.162 --rc geninfo_all_blocks=1 00:08:36.162 --rc geninfo_unexecuted_blocks=1 00:08:36.162 00:08:36.162 ' 00:08:36.162 14:13:03 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:36.162 14:13:03 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60856 00:08:36.162 14:13:03 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:36.162 14:13:03 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60856 00:08:36.162 14:13:03 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 60856 ']' 00:08:36.162 14:13:03 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.162 14:13:03 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:36.162 14:13:03 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.162 14:13:03 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:36.162 14:13:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:36.421 [2024-11-06 14:13:03.878764] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
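For the cmdline test the target is started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are reachable on /var/tmp/spdk.sock and anything else must be rejected. What the trace below exercises amounts to (rpc.py path as elsewhere in this log):

    scripts/rpc.py spdk_get_version          # allowed: returns the version JSON shown below
    scripts/rpc.py rpc_get_methods           # allowed: lists exactly the two permitted methods
    scripts/rpc.py env_dpdk_get_mem_stats    # not on the allowlist: JSON-RPC -32601 "Method not found"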
00:08:36.421 [2024-11-06 14:13:03.879203] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60856 ] 00:08:36.679 [2024-11-06 14:13:04.069223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.679 [2024-11-06 14:13:04.188271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.938 [2024-11-06 14:13:04.454992] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:37.506 14:13:05 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:37.506 14:13:05 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:08:37.506 14:13:05 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:37.768 { 00:08:37.768 "version": "SPDK v25.01-pre git sha1 d1c46ed8e", 00:08:37.768 "fields": { 00:08:37.768 "major": 25, 00:08:37.768 "minor": 1, 00:08:37.768 "patch": 0, 00:08:37.768 "suffix": "-pre", 00:08:37.768 "commit": "d1c46ed8e" 00:08:37.768 } 00:08:37.768 } 00:08:37.768 14:13:05 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:37.768 14:13:05 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:37.768 14:13:05 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:37.768 14:13:05 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:37.768 14:13:05 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:37.768 14:13:05 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:37.768 14:13:05 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.768 14:13:05 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:37.768 14:13:05 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:37.768 14:13:05 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.768 14:13:05 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:37.768 14:13:05 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:37.768 14:13:05 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:37.768 14:13:05 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:08:37.768 14:13:05 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:37.768 14:13:05 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:37.768 14:13:05 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:37.768 14:13:05 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:37.768 14:13:05 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:37.768 14:13:05 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:37.768 14:13:05 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:37.768 14:13:05 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:37.768 14:13:05 app_cmdline -- common/autotest_common.sh@644 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:37.768 14:13:05 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:38.034 request: 00:08:38.034 { 00:08:38.034 "method": "env_dpdk_get_mem_stats", 00:08:38.034 "req_id": 1 00:08:38.034 } 00:08:38.034 Got JSON-RPC error response 00:08:38.034 response: 00:08:38.034 { 00:08:38.034 "code": -32601, 00:08:38.034 "message": "Method not found" 00:08:38.034 } 00:08:38.034 14:13:05 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:08:38.034 14:13:05 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:38.034 14:13:05 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:38.034 14:13:05 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:38.034 14:13:05 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60856 00:08:38.034 14:13:05 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 60856 ']' 00:08:38.034 14:13:05 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 60856 00:08:38.034 14:13:05 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:08:38.034 14:13:05 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:38.034 14:13:05 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60856 00:08:38.034 killing process with pid 60856 00:08:38.034 14:13:05 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:38.034 14:13:05 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:38.034 14:13:05 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60856' 00:08:38.034 14:13:05 app_cmdline -- common/autotest_common.sh@971 -- # kill 60856 00:08:38.034 14:13:05 app_cmdline -- common/autotest_common.sh@976 -- # wait 60856 00:08:40.567 ************************************ 00:08:40.567 END TEST app_cmdline 00:08:40.567 ************************************ 00:08:40.567 00:08:40.567 real 0m4.521s 00:08:40.567 user 0m4.732s 00:08:40.567 sys 0m0.732s 00:08:40.567 14:13:08 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:40.567 14:13:08 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:40.567 14:13:08 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:40.567 14:13:08 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:40.567 14:13:08 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:40.567 14:13:08 -- common/autotest_common.sh@10 -- # set +x 00:08:40.567 ************************************ 00:08:40.567 START TEST version 00:08:40.567 ************************************ 00:08:40.567 14:13:08 version -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:40.870 * Looking for test storage... 
00:08:40.870 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:40.870 14:13:08 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:40.870 14:13:08 version -- common/autotest_common.sh@1691 -- # lcov --version 00:08:40.870 14:13:08 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:40.870 14:13:08 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:40.870 14:13:08 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:40.870 14:13:08 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:40.870 14:13:08 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:40.870 14:13:08 version -- scripts/common.sh@336 -- # IFS=.-: 00:08:40.870 14:13:08 version -- scripts/common.sh@336 -- # read -ra ver1 00:08:40.870 14:13:08 version -- scripts/common.sh@337 -- # IFS=.-: 00:08:40.870 14:13:08 version -- scripts/common.sh@337 -- # read -ra ver2 00:08:40.870 14:13:08 version -- scripts/common.sh@338 -- # local 'op=<' 00:08:40.870 14:13:08 version -- scripts/common.sh@340 -- # ver1_l=2 00:08:40.870 14:13:08 version -- scripts/common.sh@341 -- # ver2_l=1 00:08:40.870 14:13:08 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:40.870 14:13:08 version -- scripts/common.sh@344 -- # case "$op" in 00:08:40.870 14:13:08 version -- scripts/common.sh@345 -- # : 1 00:08:40.870 14:13:08 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:40.870 14:13:08 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:40.870 14:13:08 version -- scripts/common.sh@365 -- # decimal 1 00:08:40.870 14:13:08 version -- scripts/common.sh@353 -- # local d=1 00:08:40.870 14:13:08 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:40.870 14:13:08 version -- scripts/common.sh@355 -- # echo 1 00:08:40.870 14:13:08 version -- scripts/common.sh@365 -- # ver1[v]=1 00:08:40.870 14:13:08 version -- scripts/common.sh@366 -- # decimal 2 00:08:40.870 14:13:08 version -- scripts/common.sh@353 -- # local d=2 00:08:40.870 14:13:08 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:40.870 14:13:08 version -- scripts/common.sh@355 -- # echo 2 00:08:40.870 14:13:08 version -- scripts/common.sh@366 -- # ver2[v]=2 00:08:40.870 14:13:08 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:40.870 14:13:08 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:40.870 14:13:08 version -- scripts/common.sh@368 -- # return 0 00:08:40.870 14:13:08 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:40.870 14:13:08 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:40.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.870 --rc genhtml_branch_coverage=1 00:08:40.870 --rc genhtml_function_coverage=1 00:08:40.870 --rc genhtml_legend=1 00:08:40.870 --rc geninfo_all_blocks=1 00:08:40.870 --rc geninfo_unexecuted_blocks=1 00:08:40.870 00:08:40.870 ' 00:08:40.870 14:13:08 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:40.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.870 --rc genhtml_branch_coverage=1 00:08:40.870 --rc genhtml_function_coverage=1 00:08:40.870 --rc genhtml_legend=1 00:08:40.870 --rc geninfo_all_blocks=1 00:08:40.870 --rc geninfo_unexecuted_blocks=1 00:08:40.870 00:08:40.870 ' 00:08:40.870 14:13:08 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:40.870 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:08:40.870 --rc genhtml_branch_coverage=1 00:08:40.870 --rc genhtml_function_coverage=1 00:08:40.870 --rc genhtml_legend=1 00:08:40.870 --rc geninfo_all_blocks=1 00:08:40.870 --rc geninfo_unexecuted_blocks=1 00:08:40.870 00:08:40.870 ' 00:08:40.870 14:13:08 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:40.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.870 --rc genhtml_branch_coverage=1 00:08:40.871 --rc genhtml_function_coverage=1 00:08:40.871 --rc genhtml_legend=1 00:08:40.871 --rc geninfo_all_blocks=1 00:08:40.871 --rc geninfo_unexecuted_blocks=1 00:08:40.871 00:08:40.871 ' 00:08:40.871 14:13:08 version -- app/version.sh@17 -- # get_header_version major 00:08:40.871 14:13:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:40.871 14:13:08 version -- app/version.sh@14 -- # cut -f2 00:08:40.871 14:13:08 version -- app/version.sh@14 -- # tr -d '"' 00:08:40.871 14:13:08 version -- app/version.sh@17 -- # major=25 00:08:40.871 14:13:08 version -- app/version.sh@18 -- # get_header_version minor 00:08:40.871 14:13:08 version -- app/version.sh@14 -- # tr -d '"' 00:08:40.871 14:13:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:40.871 14:13:08 version -- app/version.sh@14 -- # cut -f2 00:08:40.871 14:13:08 version -- app/version.sh@18 -- # minor=1 00:08:40.871 14:13:08 version -- app/version.sh@19 -- # get_header_version patch 00:08:40.871 14:13:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:40.871 14:13:08 version -- app/version.sh@14 -- # cut -f2 00:08:40.871 14:13:08 version -- app/version.sh@14 -- # tr -d '"' 00:08:40.871 14:13:08 version -- app/version.sh@19 -- # patch=0 00:08:40.871 14:13:08 version -- app/version.sh@20 -- # get_header_version suffix 00:08:40.871 14:13:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:40.871 14:13:08 version -- app/version.sh@14 -- # cut -f2 00:08:40.871 14:13:08 version -- app/version.sh@14 -- # tr -d '"' 00:08:40.871 14:13:08 version -- app/version.sh@20 -- # suffix=-pre 00:08:40.871 14:13:08 version -- app/version.sh@22 -- # version=25.1 00:08:40.871 14:13:08 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:40.871 14:13:08 version -- app/version.sh@28 -- # version=25.1rc0 00:08:40.871 14:13:08 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:40.871 14:13:08 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:41.156 14:13:08 version -- app/version.sh@30 -- # py_version=25.1rc0 00:08:41.156 14:13:08 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:08:41.156 ************************************ 00:08:41.156 END TEST version 00:08:41.156 ************************************ 00:08:41.156 00:08:41.156 real 0m0.363s 00:08:41.156 user 0m0.215s 00:08:41.156 sys 0m0.205s 00:08:41.156 14:13:08 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:41.156 14:13:08 version -- common/autotest_common.sh@10 -- # set +x 00:08:41.156 14:13:08 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:08:41.156 14:13:08 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:08:41.156 14:13:08 -- spdk/autotest.sh@194 -- # uname -s 00:08:41.156 14:13:08 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:08:41.156 14:13:08 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:41.156 14:13:08 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:08:41.156 14:13:08 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:08:41.156 14:13:08 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:08:41.156 14:13:08 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:41.156 14:13:08 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:41.156 14:13:08 -- common/autotest_common.sh@10 -- # set +x 00:08:41.156 ************************************ 00:08:41.156 START TEST spdk_dd 00:08:41.156 ************************************ 00:08:41.156 14:13:08 spdk_dd -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:08:41.156 * Looking for test storage... 00:08:41.156 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:41.156 14:13:08 spdk_dd -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:41.156 14:13:08 spdk_dd -- common/autotest_common.sh@1691 -- # lcov --version 00:08:41.156 14:13:08 spdk_dd -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:41.156 14:13:08 spdk_dd -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:41.156 14:13:08 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:41.156 14:13:08 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:41.156 14:13:08 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:41.156 14:13:08 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:08:41.156 14:13:08 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:08:41.156 14:13:08 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:08:41.156 14:13:08 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:08:41.156 14:13:08 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:08:41.156 14:13:08 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:08:41.156 14:13:08 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:08:41.156 14:13:08 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:41.156 14:13:08 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:08:41.156 14:13:08 spdk_dd -- scripts/common.sh@345 -- # : 1 00:08:41.156 14:13:08 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:41.156 14:13:08 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:41.156 14:13:08 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:08:41.156 14:13:08 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:08:41.156 14:13:08 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:41.156 14:13:08 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:08:41.415 14:13:08 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:41.415 14:13:08 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:08:41.415 14:13:08 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:08:41.415 14:13:08 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:41.415 14:13:08 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:08:41.415 14:13:08 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:41.415 14:13:08 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:41.415 14:13:08 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:41.415 14:13:08 spdk_dd -- scripts/common.sh@368 -- # return 0 00:08:41.415 14:13:08 spdk_dd -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:41.415 14:13:08 spdk_dd -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:41.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.415 --rc genhtml_branch_coverage=1 00:08:41.415 --rc genhtml_function_coverage=1 00:08:41.415 --rc genhtml_legend=1 00:08:41.415 --rc geninfo_all_blocks=1 00:08:41.415 --rc geninfo_unexecuted_blocks=1 00:08:41.415 00:08:41.415 ' 00:08:41.415 14:13:08 spdk_dd -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:41.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.415 --rc genhtml_branch_coverage=1 00:08:41.415 --rc genhtml_function_coverage=1 00:08:41.415 --rc genhtml_legend=1 00:08:41.415 --rc geninfo_all_blocks=1 00:08:41.415 --rc geninfo_unexecuted_blocks=1 00:08:41.415 00:08:41.415 ' 00:08:41.415 14:13:08 spdk_dd -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:41.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.415 --rc genhtml_branch_coverage=1 00:08:41.415 --rc genhtml_function_coverage=1 00:08:41.415 --rc genhtml_legend=1 00:08:41.415 --rc geninfo_all_blocks=1 00:08:41.415 --rc geninfo_unexecuted_blocks=1 00:08:41.415 00:08:41.415 ' 00:08:41.415 14:13:08 spdk_dd -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:41.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.415 --rc genhtml_branch_coverage=1 00:08:41.415 --rc genhtml_function_coverage=1 00:08:41.415 --rc genhtml_legend=1 00:08:41.415 --rc geninfo_all_blocks=1 00:08:41.415 --rc geninfo_unexecuted_blocks=1 00:08:41.415 00:08:41.415 ' 00:08:41.415 14:13:08 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:41.415 14:13:08 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:08:41.415 14:13:08 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:41.415 14:13:08 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:41.415 14:13:08 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:41.415 14:13:08 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.415 14:13:08 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.415 14:13:08 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.415 14:13:08 spdk_dd -- paths/export.sh@5 -- # export PATH 00:08:41.415 14:13:08 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.415 14:13:08 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:41.984 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:41.984 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:41.984 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:41.984 14:13:09 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:08:41.984 14:13:09 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:08:41.984 14:13:09 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:08:41.984 14:13:09 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:08:41.984 14:13:09 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:08:41.984 14:13:09 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:08:41.984 14:13:09 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:08:41.984 14:13:09 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:08:41.984 14:13:09 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:08:41.984 14:13:09 spdk_dd -- scripts/common.sh@233 -- # local class 00:08:41.984 14:13:09 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:08:41.984 14:13:09 spdk_dd -- scripts/common.sh@235 -- # local progif 00:08:41.984 14:13:09 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:08:41.984 14:13:09 spdk_dd -- scripts/common.sh@236 -- # class=01 00:08:41.984 14:13:09 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:08:41.984 14:13:09 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:08:41.984 14:13:09 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:08:41.984 14:13:09 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:08:41.984 14:13:09 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:08:41.984 14:13:09 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:08:41.984 14:13:09 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:08:41.984 14:13:09 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:08:41.984 14:13:09 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:08:41.984 14:13:09 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:08:41.984 14:13:09 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:08:41.984 14:13:09 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:08:41.984 14:13:09 spdk_dd -- scripts/common.sh@18 -- # local i 00:08:41.984 14:13:09 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:08:41.984 14:13:09 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:41.984 14:13:09 spdk_dd -- scripts/common.sh@27 -- # return 0 00:08:41.984 14:13:09 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:08:41.984 14:13:09 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:08:41.984 14:13:09 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:08:41.984 14:13:09 spdk_dd -- scripts/common.sh@18 -- # local i 00:08:41.984 14:13:09 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:08:41.984 14:13:09 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:41.984 14:13:09 spdk_dd -- scripts/common.sh@27 -- # return 0 00:08:41.984 14:13:09 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:08:41.984 14:13:09 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:08:41.984 14:13:09 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:08:41.984 14:13:09 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:08:41.984 14:13:09 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:08:41.984 14:13:09 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:08:41.984 14:13:09 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:08:41.984 14:13:09 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:08:41.984 14:13:09 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:08:41.984 14:13:09 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:08:41.984 14:13:09 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:08:41.984 14:13:09 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:08:41.984 14:13:09 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:08:41.984 14:13:09 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:08:41.984 14:13:09 spdk_dd -- dd/common.sh@139 -- # local lib 00:08:41.984 14:13:09 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:08:41.984 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.984 14:13:09 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:41.984 14:13:09 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:08:41.984 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libasan.so.8 == liburing.so.* ]] 00:08:41.984 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.984 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:08:41.984 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.984 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:08:41.984 
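(The nvme_in_userspace trace above reduces to a single pipeline: PCI class 01 is mass storage, subclass 08 is NVM, prog-if 02 is NVMe, so the helper filters lspci output on class code 0108 with prog-if 02. A condensed sketch using the same commands the trace shows, without the per-device pci_can_use and driver checks:)

# Condensed sketch of the enumeration traced above; prints the BDF of every
# NVMe controller (class 0108, prog-if 02) visible to lspci.
lspci -mm -n -D \
  | grep -i -- -p02 \
  | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' \
  | tr -d '"'
# Expected in this VM: 0000:00:10.0 and 0000:00:11.0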
14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.984 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.1 == liburing.so.* ]] 00:08:41.984 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.984 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:08:41.984 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.984 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:08:41.984 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.984 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:08:41.984 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.984 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:08:41.984 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.984 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:08:41.984 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.984 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:08:41.984 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.984 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:08:41.984 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.984 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:08:41.984 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.984 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:08:41.984 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.984 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:08:41.984 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.984 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.15.0 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.7.0 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 
14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfu_device.so.3.0 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scsi.so.9.0 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfu_tgt.so.3.0 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ 
libspdk_fuse_dispatcher.so.1.0 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.2.0 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.11.0 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 
-- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.1 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:08:41.985 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.986 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:08:41.986 14:13:09 spdk_dd -- dd/common.sh@142 -- # read 
-r _ lib _ 00:08:41.986 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:08:41.986 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.986 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:08:41.986 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.986 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:08:41.986 14:13:09 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:41.986 14:13:09 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:08:41.986 14:13:09 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:08:41.986 * spdk_dd linked to liburing 00:08:41.986 14:13:09 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:08:41.986 14:13:09 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:08:41.986 14:13:09 spdk_dd 
-- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=y 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:08:41.986 14:13:09 spdk_dd -- 
common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:08:41.986 14:13:09 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=y 00:08:41.986 14:13:09 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:08:41.986 14:13:09 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:08:41.986 14:13:09 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:08:41.986 14:13:09 spdk_dd -- dd/common.sh@153 -- # return 0 00:08:41.986 14:13:09 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:08:41.986 14:13:09 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:08:41.986 14:13:09 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:08:41.986 14:13:09 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:41.986 14:13:09 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:41.986 ************************************ 00:08:41.986 START TEST spdk_dd_basic_rw 00:08:41.986 ************************************ 00:08:41.986 14:13:09 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:08:42.246 * Looking for test storage... 
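(The check_liburing trace above boils down to a scan of the binary's dynamic dependencies; a rough sketch follows. SPDK_DIR is a placeholder for the repository root used in this job, and the real helper additionally consults test/common/build_config.sh afterwards, as the trace shows.)

# Rough sketch of the dependency scan traced above: mark liburing as in use
# if the spdk_dd binary lists any liburing.so.* among its NEEDED libraries.
liburing_in_use=0
while read -r _ lib _; do
    [[ $lib == liburing.so.* ]] && liburing_in_use=1
done < <(objdump -p "$SPDK_DIR/build/bin/spdk_dd" | grep NEEDED)
echo "liburing_in_use=$liburing_in_use"    # 1 in the run above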
00:08:42.246 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:42.246 14:13:09 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:42.246 14:13:09 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1691 -- # lcov --version 00:08:42.246 14:13:09 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:42.246 14:13:09 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:42.246 14:13:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:42.246 14:13:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:42.246 14:13:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:42.246 14:13:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:08:42.246 14:13:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:08:42.246 14:13:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:08:42.246 14:13:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:08:42.246 14:13:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:08:42.246 14:13:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:08:42.246 14:13:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:08:42.246 14:13:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:42.246 14:13:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:08:42.246 14:13:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:08:42.246 14:13:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:42.246 14:13:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:42.246 14:13:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:08:42.246 14:13:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:08:42.246 14:13:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:42.246 14:13:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:08:42.246 14:13:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:08:42.246 14:13:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:08:42.246 14:13:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:08:42.246 14:13:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:42.246 14:13:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:08:42.246 14:13:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:08:42.246 14:13:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:42.246 14:13:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:42.246 14:13:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:08:42.246 14:13:09 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:42.246 14:13:09 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:42.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.247 --rc genhtml_branch_coverage=1 00:08:42.247 --rc genhtml_function_coverage=1 00:08:42.247 --rc genhtml_legend=1 00:08:42.247 --rc geninfo_all_blocks=1 00:08:42.247 --rc geninfo_unexecuted_blocks=1 00:08:42.247 00:08:42.247 ' 00:08:42.247 14:13:09 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:42.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.247 --rc genhtml_branch_coverage=1 00:08:42.247 --rc genhtml_function_coverage=1 00:08:42.247 --rc genhtml_legend=1 00:08:42.247 --rc geninfo_all_blocks=1 00:08:42.247 --rc geninfo_unexecuted_blocks=1 00:08:42.247 00:08:42.247 ' 00:08:42.247 14:13:09 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:42.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.247 --rc genhtml_branch_coverage=1 00:08:42.247 --rc genhtml_function_coverage=1 00:08:42.247 --rc genhtml_legend=1 00:08:42.247 --rc geninfo_all_blocks=1 00:08:42.247 --rc geninfo_unexecuted_blocks=1 00:08:42.247 00:08:42.247 ' 00:08:42.247 14:13:09 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:42.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.247 --rc genhtml_branch_coverage=1 00:08:42.247 --rc genhtml_function_coverage=1 00:08:42.247 --rc genhtml_legend=1 00:08:42.247 --rc geninfo_all_blocks=1 00:08:42.247 --rc geninfo_unexecuted_blocks=1 00:08:42.247 00:08:42.247 ' 00:08:42.247 14:13:09 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:42.247 14:13:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:08:42.247 14:13:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:42.247 14:13:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:42.247 14:13:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:08:42.247 14:13:09 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.247 14:13:09 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.247 14:13:09 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.247 14:13:09 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:08:42.247 14:13:09 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.247 14:13:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:08:42.247 14:13:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:08:42.247 14:13:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:08:42.247 14:13:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:08:42.247 14:13:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:08:42.247 14:13:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:08:42.247 14:13:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:08:42.247 14:13:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:42.247 14:13:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:42.247 14:13:09 
spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:08:42.247 14:13:09 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:08:42.247 14:13:09 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:08:42.247 14:13:09 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:08:42.507 14:13:10 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information 
Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 
Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:08:42.507 14:13:10 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:08:42.508 14:13:10 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported 
Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: 
Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 
Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:08:42.508 14:13:10 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:08:42.508 14:13:10 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:08:42.508 14:13:10 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:08:42.508 14:13:10 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:08:42.508 14:13:10 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:42.508 14:13:10 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:08:42.508 14:13:10 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:08:42.508 14:13:10 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:42.508 14:13:10 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:42.508 14:13:10 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:42.508 14:13:10 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:42.508 ************************************ 00:08:42.508 START TEST dd_bs_lt_native_bs 00:08:42.508 ************************************ 00:08:42.508 14:13:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1127 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:42.508 14:13:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # local es=0 00:08:42.508 14:13:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:42.508 14:13:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.508 14:13:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:42.508 14:13:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.508 14:13:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:42.508 14:13:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.508 14:13:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:42.509 14:13:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.509 14:13:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:42.509 14:13:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:42.766 { 00:08:42.766 "subsystems": [ 00:08:42.766 { 00:08:42.766 "subsystem": "bdev", 00:08:42.766 "config": [ 00:08:42.766 { 00:08:42.766 "params": { 00:08:42.766 "trtype": "pcie", 00:08:42.766 "traddr": "0000:00:10.0", 00:08:42.766 "name": "Nvme0" 00:08:42.766 }, 00:08:42.766 "method": "bdev_nvme_attach_controller" 00:08:42.766 }, 00:08:42.766 { 00:08:42.766 "method": "bdev_wait_for_examine" 00:08:42.766 } 00:08:42.766 ] 00:08:42.766 } 00:08:42.766 ] 00:08:42.766 } 00:08:42.766 [2024-11-06 14:13:10.240359] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:08:42.766 [2024-11-06 14:13:10.240522] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61243 ] 00:08:43.024 [2024-11-06 14:13:10.424793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.024 [2024-11-06 14:13:10.558750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.282 [2024-11-06 14:13:10.778861] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:43.539 [2024-11-06 14:13:10.977726] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:08:43.539 [2024-11-06 14:13:10.977818] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:44.475 [2024-11-06 14:13:11.741769] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:44.475 14:13:12 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # es=234 00:08:44.475 14:13:12 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:44.475 14:13:12 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@662 -- # es=106 00:08:44.475 14:13:12 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # case "$es" in 00:08:44.475 14:13:12 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@670 -- # es=1 00:08:44.475 14:13:12 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:44.475 00:08:44.475 real 0m1.899s 00:08:44.475 user 0m1.532s 00:08:44.475 sys 0m0.312s 00:08:44.475 14:13:12 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:44.475 14:13:12 
spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:08:44.475 ************************************ 00:08:44.475 END TEST dd_bs_lt_native_bs 00:08:44.475 ************************************ 00:08:44.475 14:13:12 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:08:44.475 14:13:12 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:44.475 14:13:12 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:44.475 14:13:12 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:44.475 ************************************ 00:08:44.475 START TEST dd_rw 00:08:44.475 ************************************ 00:08:44.475 14:13:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1127 -- # basic_rw 4096 00:08:44.475 14:13:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:08:44.475 14:13:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:08:44.475 14:13:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:08:44.475 14:13:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:08:44.475 14:13:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:08:44.475 14:13:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:08:44.475 14:13:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:08:44.475 14:13:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:08:44.475 14:13:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:08:44.475 14:13:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:08:44.475 14:13:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:44.475 14:13:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:44.475 14:13:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:08:44.475 14:13:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:08:44.475 14:13:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:08:44.475 14:13:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:08:44.475 14:13:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:44.475 14:13:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:45.043 14:13:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:08:45.043 14:13:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:45.043 14:13:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:45.043 14:13:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:45.302 { 00:08:45.302 "subsystems": [ 00:08:45.302 { 00:08:45.302 "subsystem": "bdev", 00:08:45.302 "config": [ 00:08:45.302 { 00:08:45.302 "params": { 00:08:45.302 "trtype": "pcie", 00:08:45.302 "traddr": "0000:00:10.0", 00:08:45.302 "name": "Nvme0" 00:08:45.302 }, 00:08:45.302 "method": "bdev_nvme_attach_controller" 00:08:45.302 }, 00:08:45.302 { 00:08:45.302 "method": "bdev_wait_for_examine" 00:08:45.302 } 00:08:45.302 ] 00:08:45.302 } 
00:08:45.302 ] 00:08:45.302 } 00:08:45.302 [2024-11-06 14:13:12.746254] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:08:45.302 [2024-11-06 14:13:12.746429] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61287 ] 00:08:45.562 [2024-11-06 14:13:12.938899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.562 [2024-11-06 14:13:13.071759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.820 [2024-11-06 14:13:13.297991] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:46.078  [2024-11-06T14:13:15.089Z] Copying: 60/60 [kB] (average 29 MBps) 00:08:47.454 00:08:47.454 14:13:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:08:47.454 14:13:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:47.455 14:13:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:47.455 14:13:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:47.455 { 00:08:47.455 "subsystems": [ 00:08:47.455 { 00:08:47.455 "subsystem": "bdev", 00:08:47.455 "config": [ 00:08:47.455 { 00:08:47.455 "params": { 00:08:47.455 "trtype": "pcie", 00:08:47.455 "traddr": "0000:00:10.0", 00:08:47.455 "name": "Nvme0" 00:08:47.455 }, 00:08:47.455 "method": "bdev_nvme_attach_controller" 00:08:47.455 }, 00:08:47.455 { 00:08:47.455 "method": "bdev_wait_for_examine" 00:08:47.455 } 00:08:47.455 ] 00:08:47.455 } 00:08:47.455 ] 00:08:47.455 } 00:08:47.455 [2024-11-06 14:13:14.794061] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:08:47.455 [2024-11-06 14:13:14.794208] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61318 ] 00:08:47.455 [2024-11-06 14:13:14.984288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.713 [2024-11-06 14:13:15.115983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.713 [2024-11-06 14:13:15.338919] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:48.010  [2024-11-06T14:13:16.580Z] Copying: 60/60 [kB] (average 19 MBps) 00:08:48.945 00:08:49.203 14:13:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:49.203 14:13:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:08:49.203 14:13:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:49.203 14:13:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:49.203 14:13:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:08:49.203 14:13:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:49.203 14:13:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:49.203 14:13:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:49.203 14:13:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:49.203 14:13:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:49.203 14:13:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:49.203 { 00:08:49.203 "subsystems": [ 00:08:49.203 { 00:08:49.203 "subsystem": "bdev", 00:08:49.203 "config": [ 00:08:49.203 { 00:08:49.203 "params": { 00:08:49.203 "trtype": "pcie", 00:08:49.203 "traddr": "0000:00:10.0", 00:08:49.203 "name": "Nvme0" 00:08:49.203 }, 00:08:49.203 "method": "bdev_nvme_attach_controller" 00:08:49.203 }, 00:08:49.203 { 00:08:49.203 "method": "bdev_wait_for_examine" 00:08:49.203 } 00:08:49.203 ] 00:08:49.203 } 00:08:49.203 ] 00:08:49.203 } 00:08:49.203 [2024-11-06 14:13:16.706114] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:08:49.203 [2024-11-06 14:13:16.706252] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61351 ] 00:08:49.461 [2024-11-06 14:13:16.893886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.461 [2024-11-06 14:13:17.030808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.720 [2024-11-06 14:13:17.250615] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:49.978  [2024-11-06T14:13:18.989Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:51.354 00:08:51.354 14:13:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:51.354 14:13:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:08:51.354 14:13:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:08:51.354 14:13:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:08:51.354 14:13:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:08:51.354 14:13:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:51.354 14:13:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:51.611 14:13:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:08:51.612 14:13:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:51.612 14:13:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:51.612 14:13:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:51.612 { 00:08:51.612 "subsystems": [ 00:08:51.612 { 00:08:51.612 "subsystem": "bdev", 00:08:51.612 "config": [ 00:08:51.612 { 00:08:51.612 "params": { 00:08:51.612 "trtype": "pcie", 00:08:51.612 "traddr": "0000:00:10.0", 00:08:51.612 "name": "Nvme0" 00:08:51.612 }, 00:08:51.612 "method": "bdev_nvme_attach_controller" 00:08:51.612 }, 00:08:51.612 { 00:08:51.612 "method": "bdev_wait_for_examine" 00:08:51.612 } 00:08:51.612 ] 00:08:51.612 } 00:08:51.612 ] 00:08:51.612 } 00:08:51.612 [2024-11-06 14:13:19.242409] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:08:51.612 [2024-11-06 14:13:19.242773] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61382 ] 00:08:51.868 [2024-11-06 14:13:19.428614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.125 [2024-11-06 14:13:19.556561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.383 [2024-11-06 14:13:19.780973] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:52.383  [2024-11-06T14:13:21.395Z] Copying: 60/60 [kB] (average 58 MBps) 00:08:53.760 00:08:53.760 14:13:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:08:53.760 14:13:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:53.760 14:13:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:53.760 14:13:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:53.760 { 00:08:53.760 "subsystems": [ 00:08:53.760 { 00:08:53.760 "subsystem": "bdev", 00:08:53.760 "config": [ 00:08:53.760 { 00:08:53.760 "params": { 00:08:53.760 "trtype": "pcie", 00:08:53.760 "traddr": "0000:00:10.0", 00:08:53.760 "name": "Nvme0" 00:08:53.760 }, 00:08:53.760 "method": "bdev_nvme_attach_controller" 00:08:53.760 }, 00:08:53.760 { 00:08:53.760 "method": "bdev_wait_for_examine" 00:08:53.760 } 00:08:53.760 ] 00:08:53.760 } 00:08:53.760 ] 00:08:53.760 } 00:08:53.760 [2024-11-06 14:13:21.114722] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:08:53.760 [2024-11-06 14:13:21.114884] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61413 ] 00:08:53.760 [2024-11-06 14:13:21.301141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.019 [2024-11-06 14:13:21.420829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.019 [2024-11-06 14:13:21.643855] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:54.278  [2024-11-06T14:13:23.301Z] Copying: 60/60 [kB] (average 58 MBps) 00:08:55.666 00:08:55.666 14:13:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:55.666 14:13:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:08:55.666 14:13:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:55.666 14:13:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:55.666 14:13:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:08:55.666 14:13:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:55.666 14:13:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:55.666 14:13:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:55.666 14:13:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:55.666 14:13:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:55.666 14:13:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:55.666 { 00:08:55.666 "subsystems": [ 00:08:55.666 { 00:08:55.666 "subsystem": "bdev", 00:08:55.666 "config": [ 00:08:55.666 { 00:08:55.666 "params": { 00:08:55.666 "trtype": "pcie", 00:08:55.666 "traddr": "0000:00:10.0", 00:08:55.666 "name": "Nvme0" 00:08:55.666 }, 00:08:55.666 "method": "bdev_nvme_attach_controller" 00:08:55.666 }, 00:08:55.666 { 00:08:55.666 "method": "bdev_wait_for_examine" 00:08:55.666 } 00:08:55.666 ] 00:08:55.666 } 00:08:55.666 ] 00:08:55.666 } 00:08:55.666 [2024-11-06 14:13:23.124389] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:08:55.666 [2024-11-06 14:13:23.124542] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61446 ] 00:08:55.936 [2024-11-06 14:13:23.311772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.936 [2024-11-06 14:13:23.436554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.196 [2024-11-06 14:13:23.656978] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:56.454  [2024-11-06T14:13:25.027Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:08:57.392 00:08:57.392 14:13:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:57.392 14:13:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:57.392 14:13:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:08:57.392 14:13:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:08:57.392 14:13:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:08:57.392 14:13:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:08:57.392 14:13:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:57.392 14:13:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:57.959 14:13:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:08:57.959 14:13:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:57.959 14:13:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:57.959 14:13:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:57.959 { 00:08:57.959 "subsystems": [ 00:08:57.959 { 00:08:57.959 "subsystem": "bdev", 00:08:57.959 "config": [ 00:08:57.959 { 00:08:57.959 "params": { 00:08:57.959 "trtype": "pcie", 00:08:57.959 "traddr": "0000:00:10.0", 00:08:57.959 "name": "Nvme0" 00:08:57.959 }, 00:08:57.959 "method": "bdev_nvme_attach_controller" 00:08:57.959 }, 00:08:57.959 { 00:08:57.959 "method": "bdev_wait_for_examine" 00:08:57.959 } 00:08:57.959 ] 00:08:57.959 } 00:08:57.959 ] 00:08:57.959 } 00:08:57.959 [2024-11-06 14:13:25.575939] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:08:57.959 [2024-11-06 14:13:25.576114] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61477 ] 00:08:58.217 [2024-11-06 14:13:25.748578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.475 [2024-11-06 14:13:25.909177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.734 [2024-11-06 14:13:26.130727] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:58.734  [2024-11-06T14:13:27.748Z] Copying: 56/56 [kB] (average 54 MBps) 00:09:00.113 00:09:00.113 14:13:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:09:00.113 14:13:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:09:00.113 14:13:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:00.113 14:13:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:00.113 { 00:09:00.113 "subsystems": [ 00:09:00.113 { 00:09:00.113 "subsystem": "bdev", 00:09:00.113 "config": [ 00:09:00.113 { 00:09:00.113 "params": { 00:09:00.113 "trtype": "pcie", 00:09:00.113 "traddr": "0000:00:10.0", 00:09:00.113 "name": "Nvme0" 00:09:00.113 }, 00:09:00.113 "method": "bdev_nvme_attach_controller" 00:09:00.113 }, 00:09:00.113 { 00:09:00.113 "method": "bdev_wait_for_examine" 00:09:00.113 } 00:09:00.113 ] 00:09:00.113 } 00:09:00.113 ] 00:09:00.113 } 00:09:00.113 [2024-11-06 14:13:27.691202] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:09:00.113 [2024-11-06 14:13:27.691374] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61508 ] 00:09:00.371 [2024-11-06 14:13:27.878315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.629 [2024-11-06 14:13:28.011047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.629 [2024-11-06 14:13:28.243228] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:00.889  [2024-11-06T14:13:29.897Z] Copying: 56/56 [kB] (average 27 MBps) 00:09:02.262 00:09:02.262 14:13:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:02.262 14:13:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:09:02.262 14:13:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:09:02.262 14:13:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:09:02.262 14:13:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:09:02.262 14:13:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:09:02.262 14:13:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:09:02.262 14:13:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:09:02.262 14:13:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:09:02.262 14:13:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:02.262 14:13:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:02.262 { 00:09:02.262 "subsystems": [ 00:09:02.262 { 00:09:02.262 "subsystem": "bdev", 00:09:02.262 "config": [ 00:09:02.262 { 00:09:02.262 "params": { 00:09:02.262 "trtype": "pcie", 00:09:02.262 "traddr": "0000:00:10.0", 00:09:02.262 "name": "Nvme0" 00:09:02.262 }, 00:09:02.262 "method": "bdev_nvme_attach_controller" 00:09:02.262 }, 00:09:02.262 { 00:09:02.262 "method": "bdev_wait_for_examine" 00:09:02.262 } 00:09:02.262 ] 00:09:02.262 } 00:09:02.262 ] 00:09:02.262 } 00:09:02.262 [2024-11-06 14:13:29.786042] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:09:02.262 [2024-11-06 14:13:29.786608] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61541 ] 00:09:02.521 [2024-11-06 14:13:29.977049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.521 [2024-11-06 14:13:30.110288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.779 [2024-11-06 14:13:30.342241] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:03.038  [2024-11-06T14:13:32.075Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:09:04.440 00:09:04.440 14:13:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:09:04.440 14:13:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:09:04.440 14:13:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:09:04.440 14:13:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:09:04.440 14:13:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:09:04.440 14:13:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:09:04.440 14:13:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:04.699 14:13:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:09:04.699 14:13:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:09:04.699 14:13:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:04.699 14:13:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:04.699 { 00:09:04.699 "subsystems": [ 00:09:04.699 { 00:09:04.699 "subsystem": "bdev", 00:09:04.699 "config": [ 00:09:04.699 { 00:09:04.699 "params": { 00:09:04.699 "trtype": "pcie", 00:09:04.699 "traddr": "0000:00:10.0", 00:09:04.699 "name": "Nvme0" 00:09:04.699 }, 00:09:04.699 "method": "bdev_nvme_attach_controller" 00:09:04.699 }, 00:09:04.699 { 00:09:04.699 "method": "bdev_wait_for_examine" 00:09:04.699 } 00:09:04.699 ] 00:09:04.699 } 00:09:04.699 ] 00:09:04.699 } 00:09:04.957 [2024-11-06 14:13:32.385364] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:09:04.957 [2024-11-06 14:13:32.385794] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61583 ] 00:09:04.957 [2024-11-06 14:13:32.573154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.216 [2024-11-06 14:13:32.701083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.475 [2024-11-06 14:13:32.928796] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:05.734  [2024-11-06T14:13:34.321Z] Copying: 56/56 [kB] (average 54 MBps) 00:09:06.686 00:09:06.686 14:13:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:09:06.686 14:13:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:09:06.686 14:13:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:06.686 14:13:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:06.953 { 00:09:06.953 "subsystems": [ 00:09:06.953 { 00:09:06.953 "subsystem": "bdev", 00:09:06.953 "config": [ 00:09:06.953 { 00:09:06.953 "params": { 00:09:06.953 "trtype": "pcie", 00:09:06.953 "traddr": "0000:00:10.0", 00:09:06.953 "name": "Nvme0" 00:09:06.953 }, 00:09:06.953 "method": "bdev_nvme_attach_controller" 00:09:06.953 }, 00:09:06.953 { 00:09:06.953 "method": "bdev_wait_for_examine" 00:09:06.953 } 00:09:06.953 ] 00:09:06.953 } 00:09:06.953 ] 00:09:06.953 } 00:09:06.953 [2024-11-06 14:13:34.432237] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:09:06.953 [2024-11-06 14:13:34.432623] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61613 ] 00:09:07.221 [2024-11-06 14:13:34.626905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.221 [2024-11-06 14:13:34.769302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.491 [2024-11-06 14:13:35.025924] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:07.753  [2024-11-06T14:13:36.763Z] Copying: 56/56 [kB] (average 54 MBps) 00:09:09.128 00:09:09.128 14:13:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:09.128 14:13:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:09:09.128 14:13:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:09:09.128 14:13:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:09:09.128 14:13:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:09:09.128 14:13:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:09:09.128 14:13:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:09:09.128 14:13:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:09:09.128 14:13:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:09:09.128 14:13:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:09.128 14:13:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:09.128 { 00:09:09.128 "subsystems": [ 00:09:09.128 { 00:09:09.128 "subsystem": "bdev", 00:09:09.128 "config": [ 00:09:09.128 { 00:09:09.128 "params": { 00:09:09.128 "trtype": "pcie", 00:09:09.128 "traddr": "0000:00:10.0", 00:09:09.128 "name": "Nvme0" 00:09:09.128 }, 00:09:09.128 "method": "bdev_nvme_attach_controller" 00:09:09.128 }, 00:09:09.128 { 00:09:09.128 "method": "bdev_wait_for_examine" 00:09:09.128 } 00:09:09.128 ] 00:09:09.128 } 00:09:09.128 ] 00:09:09.128 } 00:09:09.128 [2024-11-06 14:13:36.589242] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:09:09.128 [2024-11-06 14:13:36.589549] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61647 ] 00:09:09.386 [2024-11-06 14:13:36.772012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.386 [2024-11-06 14:13:36.897411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.644 [2024-11-06 14:13:37.122381] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:09.902  [2024-11-06T14:13:38.469Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:09:10.834 00:09:10.834 14:13:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:09:10.834 14:13:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:09:10.834 14:13:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:09:10.834 14:13:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:09:10.834 14:13:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:09:10.834 14:13:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:09:10.834 14:13:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:09:10.834 14:13:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:11.401 14:13:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:09:11.401 14:13:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:09:11.401 14:13:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:11.401 14:13:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:11.401 { 00:09:11.401 "subsystems": [ 00:09:11.401 { 00:09:11.401 "subsystem": "bdev", 00:09:11.401 "config": [ 00:09:11.401 { 00:09:11.401 "params": { 00:09:11.401 "trtype": "pcie", 00:09:11.401 "traddr": "0000:00:10.0", 00:09:11.401 "name": "Nvme0" 00:09:11.401 }, 00:09:11.401 "method": "bdev_nvme_attach_controller" 00:09:11.401 }, 00:09:11.401 { 00:09:11.401 "method": "bdev_wait_for_examine" 00:09:11.401 } 00:09:11.401 ] 00:09:11.401 } 00:09:11.401 ] 00:09:11.401 } 00:09:11.401 [2024-11-06 14:13:38.962191] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:09:11.401 [2024-11-06 14:13:38.962396] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61678 ] 00:09:11.662 [2024-11-06 14:13:39.219082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.919 [2024-11-06 14:13:39.349728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.178 [2024-11-06 14:13:39.571351] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:12.178  [2024-11-06T14:13:41.188Z] Copying: 48/48 [kB] (average 46 MBps) 00:09:13.553 00:09:13.553 14:13:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:09:13.553 14:13:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:09:13.553 14:13:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:13.553 14:13:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:13.553 { 00:09:13.553 "subsystems": [ 00:09:13.553 { 00:09:13.553 "subsystem": "bdev", 00:09:13.553 "config": [ 00:09:13.553 { 00:09:13.553 "params": { 00:09:13.553 "trtype": "pcie", 00:09:13.553 "traddr": "0000:00:10.0", 00:09:13.553 "name": "Nvme0" 00:09:13.553 }, 00:09:13.553 "method": "bdev_nvme_attach_controller" 00:09:13.553 }, 00:09:13.553 { 00:09:13.553 "method": "bdev_wait_for_examine" 00:09:13.553 } 00:09:13.553 ] 00:09:13.553 } 00:09:13.553 ] 00:09:13.553 } 00:09:13.553 [2024-11-06 14:13:41.146784] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:09:13.553 [2024-11-06 14:13:41.146956] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61714 ] 00:09:13.812 [2024-11-06 14:13:41.335456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.070 [2024-11-06 14:13:41.467153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.070 [2024-11-06 14:13:41.700055] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:14.329  [2024-11-06T14:13:43.339Z] Copying: 48/48 [kB] (average 23 MBps) 00:09:15.704 00:09:15.704 14:13:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:15.704 14:13:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:09:15.704 14:13:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:09:15.704 14:13:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:09:15.704 14:13:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:09:15.704 14:13:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:09:15.704 14:13:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:09:15.704 14:13:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:09:15.704 14:13:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:09:15.704 14:13:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:15.704 14:13:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:15.704 { 00:09:15.704 "subsystems": [ 00:09:15.704 { 00:09:15.704 "subsystem": "bdev", 00:09:15.704 "config": [ 00:09:15.704 { 00:09:15.704 "params": { 00:09:15.704 "trtype": "pcie", 00:09:15.704 "traddr": "0000:00:10.0", 00:09:15.704 "name": "Nvme0" 00:09:15.705 }, 00:09:15.705 "method": "bdev_nvme_attach_controller" 00:09:15.705 }, 00:09:15.705 { 00:09:15.705 "method": "bdev_wait_for_examine" 00:09:15.705 } 00:09:15.705 ] 00:09:15.705 } 00:09:15.705 ] 00:09:15.705 } 00:09:15.705 [2024-11-06 14:13:43.102414] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:09:15.705 [2024-11-06 14:13:43.102582] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61742 ] 00:09:15.705 [2024-11-06 14:13:43.295002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.963 [2024-11-06 14:13:43.417114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.222 [2024-11-06 14:13:43.632533] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:16.222  [2024-11-06T14:13:45.234Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:09:17.599 00:09:17.599 14:13:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:09:17.599 14:13:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:09:17.599 14:13:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:09:17.599 14:13:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:09:17.599 14:13:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:09:17.599 14:13:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:09:17.599 14:13:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:17.858 14:13:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:09:17.858 14:13:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:09:17.858 14:13:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:17.858 14:13:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:18.117 { 00:09:18.117 "subsystems": [ 00:09:18.117 { 00:09:18.117 "subsystem": "bdev", 00:09:18.117 "config": [ 00:09:18.117 { 00:09:18.117 "params": { 00:09:18.117 "trtype": "pcie", 00:09:18.117 "traddr": "0000:00:10.0", 00:09:18.117 "name": "Nvme0" 00:09:18.117 }, 00:09:18.117 "method": "bdev_nvme_attach_controller" 00:09:18.117 }, 00:09:18.117 { 00:09:18.117 "method": "bdev_wait_for_examine" 00:09:18.117 } 00:09:18.117 ] 00:09:18.117 } 00:09:18.117 ] 00:09:18.117 } 00:09:18.117 [2024-11-06 14:13:45.581954] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:09:18.117 [2024-11-06 14:13:45.582302] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61779 ] 00:09:18.376 [2024-11-06 14:13:45.771585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.376 [2024-11-06 14:13:45.896119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.634 [2024-11-06 14:13:46.123200] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:18.892  [2024-11-06T14:13:47.464Z] Copying: 48/48 [kB] (average 46 MBps) 00:09:19.829 00:09:19.829 14:13:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:09:19.829 14:13:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:09:19.829 14:13:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:19.829 14:13:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:19.829 { 00:09:19.829 "subsystems": [ 00:09:19.829 { 00:09:19.829 "subsystem": "bdev", 00:09:19.829 "config": [ 00:09:19.829 { 00:09:19.829 "params": { 00:09:19.829 "trtype": "pcie", 00:09:19.829 "traddr": "0000:00:10.0", 00:09:19.829 "name": "Nvme0" 00:09:19.829 }, 00:09:19.829 "method": "bdev_nvme_attach_controller" 00:09:19.829 }, 00:09:19.829 { 00:09:19.829 "method": "bdev_wait_for_examine" 00:09:19.829 } 00:09:19.829 ] 00:09:19.829 } 00:09:19.829 ] 00:09:19.829 } 00:09:20.088 [2024-11-06 14:13:47.507883] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:09:20.088 [2024-11-06 14:13:47.508093] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61810 ] 00:09:20.088 [2024-11-06 14:13:47.692770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.346 [2024-11-06 14:13:47.823618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.603 [2024-11-06 14:13:48.049761] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:20.862  [2024-11-06T14:13:49.446Z] Copying: 48/48 [kB] (average 46 MBps) 00:09:21.811 00:09:22.073 14:13:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:22.073 14:13:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:09:22.073 14:13:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:09:22.073 14:13:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:09:22.073 14:13:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:09:22.073 14:13:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:09:22.073 14:13:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:09:22.073 14:13:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:09:22.073 14:13:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:09:22.073 14:13:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:22.073 14:13:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:22.073 { 00:09:22.073 "subsystems": [ 00:09:22.073 { 00:09:22.073 "subsystem": "bdev", 00:09:22.073 "config": [ 00:09:22.073 { 00:09:22.073 "params": { 00:09:22.073 "trtype": "pcie", 00:09:22.073 "traddr": "0000:00:10.0", 00:09:22.073 "name": "Nvme0" 00:09:22.073 }, 00:09:22.073 "method": "bdev_nvme_attach_controller" 00:09:22.073 }, 00:09:22.073 { 00:09:22.073 "method": "bdev_wait_for_examine" 00:09:22.073 } 00:09:22.073 ] 00:09:22.073 } 00:09:22.073 ] 00:09:22.073 } 00:09:22.073 [2024-11-06 14:13:49.592068] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
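Taken together, the runs above form one iteration of the dd_rw queue-depth loop: 49152 bytes of generated data (count=3 blocks of bs=16384) are written from dd.dump0 to the Nvme0n1 bdev at qd=64, read back into dd.dump1, compared with diff -q, and the bdev is then cleared with a 1 MiB write of zeroes before the next iteration. A minimal sketch of the cycle using only flags that appear in the log (paths shortened, and the /dev/fd/62 config feed shown earlier is assumed; illustrative, not the test script itself):

  # write 3 x 16 KiB of generated data to the NVMe bdev at queue depth 64
  spdk_dd --if=test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62
  # read the same 3 blocks back into a second dump file
  spdk_dd --ib=Nvme0n1 --of=test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62
  # the round trip must be byte-for-byte identical
  diff -q test/dd/dd.dump0 test/dd/dd.dump1
  # clear the first megabyte of the bdev before the next iteration
  spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62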
00:09:22.073 [2024-11-06 14:13:49.592223] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61843 ] 00:09:22.332 [2024-11-06 14:13:49.781007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.332 [2024-11-06 14:13:49.910390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.591 [2024-11-06 14:13:50.144385] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:22.850  [2024-11-06T14:13:51.861Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:09:24.226 00:09:24.226 00:09:24.226 real 0m39.368s 00:09:24.226 user 0m32.683s 00:09:24.226 sys 0m21.163s 00:09:24.226 ************************************ 00:09:24.226 END TEST dd_rw 00:09:24.226 ************************************ 00:09:24.226 14:13:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:24.226 14:13:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:24.226 14:13:51 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:09:24.226 14:13:51 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:24.226 14:13:51 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:24.226 14:13:51 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:09:24.226 ************************************ 00:09:24.226 START TEST dd_rw_offset 00:09:24.226 ************************************ 00:09:24.226 14:13:51 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1127 -- # basic_offset 00:09:24.226 14:13:51 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:09:24.226 14:13:51 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:09:24.226 14:13:51 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:09:24.226 14:13:51 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:09:24.226 14:13:51 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:09:24.226 14:13:51 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=613utwl6fnrd3zcq6wtiqxpkxtwxz0jr2yafyux0ubw48o049u3h06lzdkkzgfyq9ueizi9abflqng8gkjdyejezjdv0so2g0y12xu8h8oig24ht5eruen6wirlp8vgg2wrs0nmf42gtnld15csfgmywa1c2xjxix5g6vdmtbvwpl0p0nj1zdr8n1cnm3iheocbkc68cg7s1jm8o30q1x4sdjpyhirnlewakmy7net1wtl9neb9fxmugxbkbzve3mk6yjhweky67jqfay5rl6kzsege50xngbs77zkho292374v6jq5vss42yannsszbtgz3rbbc021t78bham8gnza2w8jzak2cgdc00psm9ybqd4d2m3x9oy58l06f46qpmwfoqpzozl8ibraoq5pqu92e0z9vv1i28nrzzsvyowkkv5k5izmn6dsg7zxfutc2stmkgs4utbzo4gnqjp8xsbu7ukab2uolhkaspl2mtl1a2lf7u1zn67c21mceildqzr8cd99f7c7xda5byrho6updop8sx2jhg5cgof9nggmg3w73j03fv5v2m92t7ck1kd1wixb1yhe8w44dpgm7k0zd3eqr0yfbx3rer0c2mge8kw27llk1l1f8sewlrpc4je003o7wqkxr2y7vilo79hx79nvwsq8u6020ufont3hv5caiemnm0j6bfqo4eveycz6e5u6d00hxi0umbjzenrahwbk2hvxi0f1nkywjgckdq7123dsc80viyl1ep8mgeet3b0lxvw48wydsz86mnjj5qpq5amqwh1mejautp09cbbzgcyxaamblih7oogqym4znp6i4lr2zihbw5hbqz5ggxjjvgyqghtiq6pfp7o4lqram7icx9a47iyj59es4qura65ea93anbhwnph713zi44ufx9vdxwzlk2qbxsq9sabgi4e135m9bu2il6amkjwrin0kbsf7i8492w6v1sthqcmi5rz0f30xj3nbso7e0j7ux0v8s6avqmh4kknpll7nsgcb07q0azsbykz9ue8cntvagsa2i9elcjj248a4bhiy1q6cdpgicw8qjgm8k77p3kz5n9cpt2rlp76peq8z4ufhtrk7i9my6ltvcoz7heqgvfervfcdlkv516fn583ubiq41dvjsskfzrclu7cxs42il846wyngq4tpz3pjvl80l8bbe172esk0nlculdvkkbyxkcsme7vaieqnfmf14x1nas17csztpyrpxza3ri7cq5a9ps8s7lcy3x9k2v1p2tkwyjxfukhty0115he3m8j3nd8bq2g781gmrfcw1dga18k2vlz6b8i6zoo0zq41stlspt4c1hevsqmw514qjpor77pyvnypiwka9flli2ld43rv27tt2med1rkylpmownqfsxavvbce9oagsmnch40n5i0qmtqk284q9ovs4q7vbqr08341ev60l2t37nvnd758o7mvyizk385fj4xy2zdzxm8ka0yn41lx2w7hibrg9s9m6pk2h7umwzodha76bbg01g43jvdq6ttpl0yefupdsmy3fy3g2pgxdlko1krlpwxgngipxyezu37c7oyk9473liu1gax2275wt2njd0vfgqqbgiiwowy8d8ks0xukswztg0a64jpibygug5lzli6n02v77suynwz88kef0fm03ux6egqixq2ti7fzo5dafpoudcmwh857uku66cart3ct4p0jdipkc86f28z988dai5kiwjg3ccyhr6pdyv1xan0r6e2lw0l96gibds3syzr6kixbux8bnik005ia5p798g1kzv0tnzq0adqg27vtvd14q0msqaygvdkzfzze0udrpuqwsrqwp0231efi0n0yjl7i9mdlgowdjaayitmd4mmo4i6hnakc0ebdrv6k0jjfu0rfjxlol0773d8hsft9iubf7c9141qd744ou76akhdrjaqljweergtqbxzn5akm0wfada4nm1a68c6ous5ljhi91okgtd4jkbwz7twdqkj2dwj09s1zl1frjiakc34jthsuf2ka869ma4nmhzs6von5l0xfg3von996a6qex9uefurwq07pmo21q3ehqo1aisxyckbv65pf7klcnptcd0fiymkphbopnpcd0oga5mtvpmk8z5ita5b0fcmz23m4ixzx6b9159wo76bx6hbqarr9i6henbjrym9phg06neo1ayd7luyujx4oujgut79j7rath6dc5fj4rrh0teapuayj9rm8m412wahy7l9j557wh4efgz26jgr3livkr35twjzwgpbf19hemdkgovr4838r40rrpspnibgk4i3h1fw13ioop1okblysg7zxnuqe7sj0xt8dvw4a36qndpasiunvhvt4t4o2urxnxynw5dchrjt9zlbof0xydi1obl8tm76hib1irgy6fjeaks1l1aq83y19ul54oxy282j7bzs2fedzgge5fbo0atc5nj4lfz31lc4pfjsd0aox09nb3wtrgyx01nihjgbrgjqtmm8rgxubz7gktzkrzba4q6dkmby4fceqr0hzvl7dbk0ks6n1w8ab7ljg6y28fftpme57hzygj2g98vznzr84vxhc2ia65725y0vwilmlnxmtpdpeohzn4v52oe22mytvp3pi6nxbol7csbbobqj43dy1ubw2rlykr1hzvqcxxn8mji9si4ppzqb7s351o1w8zhbgapk6c86lzkwj9g3wewziomd62euts50088tdonlykywpt39mynz2t2q3akv17s9v2qiu7hsb6q34jlfab121r4wtsplmceebd7fbr5768d02ml1phdnn9c58ahlbekewp58q5c837f3ub4vy3qwxjm2zw24lugwgn6o6hh0kdbigc9op1fxq53mbexrxdo0tdnvfdepx2jwjv2l78s7jwejj3tqsob1w58bibdge992mhkev71r4elk0r6y8oml2voqkttzpzh57uqhjejba9qmdeq5ggbgz02w7m953tcmir15g8z2d3athraf2dxu1mfct19t5boebui2c5xwao0wyx6zkztngly4bekxdeltvbdvab4v8wzdqppzbjpppvvukvjzdvmu82qb6q6xx5rw8oi2jepdyvczy6mpu3xeu051a3fwrwwwf9pm97vv4j777afu42yg93atvsuwhxfn11bgmexsaz2f4o12egd9ci91dxkx5ksw11f2n2sdcu1bii6taffmot6nam0tn7y3qvjh39v6ior7ggzmeeevoo2u2zua1vhz0umxlo9vblajdgxc7salz6dpfvqv5cxvpfhsf8zuvorhxmlrdpq5t50ti81gp33iw5pdeybwz4pzgx3c4mxy2lcytob1w7uvf56jvp66aslaf4yoqu1lfky9c1czdl662t7myhedwx1br7nr1r0cw1xdns5kxcqaopwa9b3jse047pb27piadcsctca7fdaum3ewoqs2qggpzc9ibvdzv9vmdj75e4bw5zp9e4vb50jx635h3f1b
wu4g2p74z3qm5mp51pczcrwkbrikqzajoiqpvluvstq5mt44rbl15g7dzkvpnefotm9r82vzoelfeff5wndwdbeharg3lcyr3z3btno1qo3jyyj933jklorj1wqeismcuoqev5zyt0pso440c16nwlhgzufafwvu6zgcweliezj0fhwyptceimn2zmbtuhi1mdk2altfte1hoha3vzn9ksxsibine6dd36qfznb6pj4313t6efl7onxpwvdxyk58o3bj5nfz1ymuebh85faerpt9r0ebzbkg8y4q8fovmmttv0b85h8y8c3vszrci4yll2xfbpepoyn79uk3ida0j2iaisqbh4l75eisdhvga0lfuswrotx3kqaonrb917s08bcpoiu6k3z2gvqtaysfh2ifjtbfdn1ucz95lhoqt13d2ryhhpm5o7zwq6s9jg3xk36z9sebs4u8faq96o888puds90jke6u5jthik9bx5bqovgkqc5bx8nxziirk7th9bzdviv66q78tx4l7us4bfi59jbrdwh832 00:09:24.226 14:13:51 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:09:24.226 14:13:51 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:09:24.226 14:13:51 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:09:24.226 14:13:51 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:09:24.226 { 00:09:24.226 "subsystems": [ 00:09:24.226 { 00:09:24.226 "subsystem": "bdev", 00:09:24.226 "config": [ 00:09:24.226 { 00:09:24.226 "params": { 00:09:24.226 "trtype": "pcie", 00:09:24.226 "traddr": "0000:00:10.0", 00:09:24.226 "name": "Nvme0" 00:09:24.226 }, 00:09:24.226 "method": "bdev_nvme_attach_controller" 00:09:24.226 }, 00:09:24.226 { 00:09:24.226 "method": "bdev_wait_for_examine" 00:09:24.226 } 00:09:24.226 ] 00:09:24.226 } 00:09:24.226 ] 00:09:24.226 } 00:09:24.226 [2024-11-06 14:13:51.705275] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:09:24.226 [2024-11-06 14:13:51.705433] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61891 ] 00:09:24.484 [2024-11-06 14:13:51.896062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.484 [2024-11-06 14:13:52.027930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.756 [2024-11-06 14:13:52.261680] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:25.016  [2024-11-06T14:13:54.024Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:09:26.389 00:09:26.389 14:13:53 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:09:26.389 14:13:53 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:09:26.389 14:13:53 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:09:26.389 14:13:53 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:09:26.389 { 00:09:26.389 "subsystems": [ 00:09:26.389 { 00:09:26.389 "subsystem": "bdev", 00:09:26.389 "config": [ 00:09:26.389 { 00:09:26.389 "params": { 00:09:26.389 "trtype": "pcie", 00:09:26.389 "traddr": "0000:00:10.0", 00:09:26.389 "name": "Nvme0" 00:09:26.389 }, 00:09:26.389 "method": "bdev_nvme_attach_controller" 00:09:26.389 }, 00:09:26.389 { 00:09:26.389 "method": "bdev_wait_for_examine" 00:09:26.389 } 00:09:26.389 ] 00:09:26.389 } 00:09:26.389 ] 00:09:26.389 } 00:09:26.389 [2024-11-06 14:13:53.835194] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
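The dd_rw_offset test started here exercises --seek and --skip: the 4096-byte random string generated above (gen_bytes 4096) is written one block past the start of the bdev with --seek=1, then read back with --skip=1 --count=1, and the comparison further below checks that the readback matches the original payload. A sketch of that flow with flags taken from the log; how the payload lands in dd.dump0 is not shown in this excerpt, so the printf step is an assumption (paths shortened):

  data=$(gen_bytes 4096)                     # the 4 KiB random payload shown above
  printf '%s' "$data" > test/dd/dd.dump0     # assumed: seed the input dump file with the payload
  # write the payload one block past the start of the bdev
  spdk_dd --if=test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62
  # read exactly one block back from the same offset
  spdk_dd --ib=Nvme0n1 --of=test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62
  # the first 4096 bytes read back must equal the original payload
  read -rn4096 data_check < test/dd/dd.dump1
  [[ $data_check == "$data" ]]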
00:09:26.389 [2024-11-06 14:13:53.835350] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61922 ] 00:09:26.648 [2024-11-06 14:13:54.027814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.648 [2024-11-06 14:13:54.161831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.906 [2024-11-06 14:13:54.396844] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:27.164  [2024-11-06T14:13:56.176Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:09:28.541 00:09:28.541 14:13:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:09:28.541 ************************************ 00:09:28.541 END TEST dd_rw_offset 00:09:28.541 ************************************ 00:09:28.542 14:13:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ 613utwl6fnrd3zcq6wtiqxpkxtwxz0jr2yafyux0ubw48o049u3h06lzdkkzgfyq9ueizi9abflqng8gkjdyejezjdv0so2g0y12xu8h8oig24ht5eruen6wirlp8vgg2wrs0nmf42gtnld15csfgmywa1c2xjxix5g6vdmtbvwpl0p0nj1zdr8n1cnm3iheocbkc68cg7s1jm8o30q1x4sdjpyhirnlewakmy7net1wtl9neb9fxmugxbkbzve3mk6yjhweky67jqfay5rl6kzsege50xngbs77zkho292374v6jq5vss42yannsszbtgz3rbbc021t78bham8gnza2w8jzak2cgdc00psm9ybqd4d2m3x9oy58l06f46qpmwfoqpzozl8ibraoq5pqu92e0z9vv1i28nrzzsvyowkkv5k5izmn6dsg7zxfutc2stmkgs4utbzo4gnqjp8xsbu7ukab2uolhkaspl2mtl1a2lf7u1zn67c21mceildqzr8cd99f7c7xda5byrho6updop8sx2jhg5cgof9nggmg3w73j03fv5v2m92t7ck1kd1wixb1yhe8w44dpgm7k0zd3eqr0yfbx3rer0c2mge8kw27llk1l1f8sewlrpc4je003o7wqkxr2y7vilo79hx79nvwsq8u6020ufont3hv5caiemnm0j6bfqo4eveycz6e5u6d00hxi0umbjzenrahwbk2hvxi0f1nkywjgckdq7123dsc80viyl1ep8mgeet3b0lxvw48wydsz86mnjj5qpq5amqwh1mejautp09cbbzgcyxaamblih7oogqym4znp6i4lr2zihbw5hbqz5ggxjjvgyqghtiq6pfp7o4lqram7icx9a47iyj59es4qura65ea93anbhwnph713zi44ufx9vdxwzlk2qbxsq9sabgi4e135m9bu2il6amkjwrin0kbsf7i8492w6v1sthqcmi5rz0f30xj3nbso7e0j7ux0v8s6avqmh4kknpll7nsgcb07q0azsbykz9ue8cntvagsa2i9elcjj248a4bhiy1q6cdpgicw8qjgm8k77p3kz5n9cpt2rlp76peq8z4ufhtrk7i9my6ltvcoz7heqgvfervfcdlkv516fn583ubiq41dvjsskfzrclu7cxs42il846wyngq4tpz3pjvl80l8bbe172esk0nlculdvkkbyxkcsme7vaieqnfmf14x1nas17csztpyrpxza3ri7cq5a9ps8s7lcy3x9k2v1p2tkwyjxfukhty0115he3m8j3nd8bq2g781gmrfcw1dga18k2vlz6b8i6zoo0zq41stlspt4c1hevsqmw514qjpor77pyvnypiwka9flli2ld43rv27tt2med1rkylpmownqfsxavvbce9oagsmnch40n5i0qmtqk284q9ovs4q7vbqr08341ev60l2t37nvnd758o7mvyizk385fj4xy2zdzxm8ka0yn41lx2w7hibrg9s9m6pk2h7umwzodha76bbg01g43jvdq6ttpl0yefupdsmy3fy3g2pgxdlko1krlpwxgngipxyezu37c7oyk9473liu1gax2275wt2njd0vfgqqbgiiwowy8d8ks0xukswztg0a64jpibygug5lzli6n02v77suynwz88kef0fm03ux6egqixq2ti7fzo5dafpoudcmwh857uku66cart3ct4p0jdipkc86f28z988dai5kiwjg3ccyhr6pdyv1xan0r6e2lw0l96gibds3syzr6kixbux8bnik005ia5p798g1kzv0tnzq0adqg27vtvd14q0msqaygvdkzfzze0udrpuqwsrqwp0231efi0n0yjl7i9mdlgowdjaayitmd4mmo4i6hnakc0ebdrv6k0jjfu0rfjxlol0773d8hsft9iubf7c9141qd744ou76akhdrjaqljweergtqbxzn5akm0wfada4nm1a68c6ous5ljhi91okgtd4jkbwz7twdqkj2dwj09s1zl1frjiakc34jthsuf2ka869ma4nmhzs6von5l0xfg3von996a6qex9uefurwq07pmo21q3ehqo1aisxyckbv65pf7klcnptcd0fiymkphbopnpcd0oga5mtvpmk8z5ita5b0fcmz23m4ixzx6b9159wo76bx6hbqarr9i6henbjrym9phg06neo1ayd7luyujx4oujgut79j7rath6dc5fj4rrh0teapuayj9rm8m412wahy7l9j557wh4efgz26jgr3livkr35twjzwgpbf19hemdkgovr4838r40rrpspnibgk4i3h1fw13ioop1okblysg7zxnuqe7sj0xt8dvw4a36qndpasiunvhvt4t4o2urxnxynw5dchrjt9zlbof0xydi1obl8tm76hib1irgy6fjeaks1l1aq83y19ul54ox
y282j7bzs2fedzgge5fbo0atc5nj4lfz31lc4pfjsd0aox09nb3wtrgyx01nihjgbrgjqtmm8rgxubz7gktzkrzba4q6dkmby4fceqr0hzvl7dbk0ks6n1w8ab7ljg6y28fftpme57hzygj2g98vznzr84vxhc2ia65725y0vwilmlnxmtpdpeohzn4v52oe22mytvp3pi6nxbol7csbbobqj43dy1ubw2rlykr1hzvqcxxn8mji9si4ppzqb7s351o1w8zhbgapk6c86lzkwj9g3wewziomd62euts50088tdonlykywpt39mynz2t2q3akv17s9v2qiu7hsb6q34jlfab121r4wtsplmceebd7fbr5768d02ml1phdnn9c58ahlbekewp58q5c837f3ub4vy3qwxjm2zw24lugwgn6o6hh0kdbigc9op1fxq53mbexrxdo0tdnvfdepx2jwjv2l78s7jwejj3tqsob1w58bibdge992mhkev71r4elk0r6y8oml2voqkttzpzh57uqhjejba9qmdeq5ggbgz02w7m953tcmir15g8z2d3athraf2dxu1mfct19t5boebui2c5xwao0wyx6zkztngly4bekxdeltvbdvab4v8wzdqppzbjpppvvukvjzdvmu82qb6q6xx5rw8oi2jepdyvczy6mpu3xeu051a3fwrwwwf9pm97vv4j777afu42yg93atvsuwhxfn11bgmexsaz2f4o12egd9ci91dxkx5ksw11f2n2sdcu1bii6taffmot6nam0tn7y3qvjh39v6ior7ggzmeeevoo2u2zua1vhz0umxlo9vblajdgxc7salz6dpfvqv5cxvpfhsf8zuvorhxmlrdpq5t50ti81gp33iw5pdeybwz4pzgx3c4mxy2lcytob1w7uvf56jvp66aslaf4yoqu1lfky9c1czdl662t7myhedwx1br7nr1r0cw1xdns5kxcqaopwa9b3jse047pb27piadcsctca7fdaum3ewoqs2qggpzc9ibvdzv9vmdj75e4bw5zp9e4vb50jx635h3f1bwu4g2p74z3qm5mp51pczcrwkbrikqzajoiqpvluvstq5mt44rbl15g7dzkvpnefotm9r82vzoelfeff5wndwdbeharg3lcyr3z3btno1qo3jyyj933jklorj1wqeismcuoqev5zyt0pso440c16nwlhgzufafwvu6zgcweliezj0fhwyptceimn2zmbtuhi1mdk2altfte1hoha3vzn9ksxsibine6dd36qfznb6pj4313t6efl7onxpwvdxyk58o3bj5nfz1ymuebh85faerpt9r0ebzbkg8y4q8fovmmttv0b85h8y8c3vszrci4yll2xfbpepoyn79uk3ida0j2iaisqbh4l75eisdhvga0lfuswrotx3kqaonrb917s08bcpoiu6k3z2gvqtaysfh2ifjtbfdn1ucz95lhoqt13d2ryhhpm5o7zwq6s9jg3xk36z9sebs4u8faq96o888puds90jke6u5jthik9bx5bqovgkqc5bx8nxziirk7th9bzdviv66q78tx4l7us4bfi59jbrdwh832 == \6\1\3\u\t\w\l\6\f\n\r\d\3\z\c\q\6\w\t\i\q\x\p\k\x\t\w\x\z\0\j\r\2\y\a\f\y\u\x\0\u\b\w\4\8\o\0\4\9\u\3\h\0\6\l\z\d\k\k\z\g\f\y\q\9\u\e\i\z\i\9\a\b\f\l\q\n\g\8\g\k\j\d\y\e\j\e\z\j\d\v\0\s\o\2\g\0\y\1\2\x\u\8\h\8\o\i\g\2\4\h\t\5\e\r\u\e\n\6\w\i\r\l\p\8\v\g\g\2\w\r\s\0\n\m\f\4\2\g\t\n\l\d\1\5\c\s\f\g\m\y\w\a\1\c\2\x\j\x\i\x\5\g\6\v\d\m\t\b\v\w\p\l\0\p\0\n\j\1\z\d\r\8\n\1\c\n\m\3\i\h\e\o\c\b\k\c\6\8\c\g\7\s\1\j\m\8\o\3\0\q\1\x\4\s\d\j\p\y\h\i\r\n\l\e\w\a\k\m\y\7\n\e\t\1\w\t\l\9\n\e\b\9\f\x\m\u\g\x\b\k\b\z\v\e\3\m\k\6\y\j\h\w\e\k\y\6\7\j\q\f\a\y\5\r\l\6\k\z\s\e\g\e\5\0\x\n\g\b\s\7\7\z\k\h\o\2\9\2\3\7\4\v\6\j\q\5\v\s\s\4\2\y\a\n\n\s\s\z\b\t\g\z\3\r\b\b\c\0\2\1\t\7\8\b\h\a\m\8\g\n\z\a\2\w\8\j\z\a\k\2\c\g\d\c\0\0\p\s\m\9\y\b\q\d\4\d\2\m\3\x\9\o\y\5\8\l\0\6\f\4\6\q\p\m\w\f\o\q\p\z\o\z\l\8\i\b\r\a\o\q\5\p\q\u\9\2\e\0\z\9\v\v\1\i\2\8\n\r\z\z\s\v\y\o\w\k\k\v\5\k\5\i\z\m\n\6\d\s\g\7\z\x\f\u\t\c\2\s\t\m\k\g\s\4\u\t\b\z\o\4\g\n\q\j\p\8\x\s\b\u\7\u\k\a\b\2\u\o\l\h\k\a\s\p\l\2\m\t\l\1\a\2\l\f\7\u\1\z\n\6\7\c\2\1\m\c\e\i\l\d\q\z\r\8\c\d\9\9\f\7\c\7\x\d\a\5\b\y\r\h\o\6\u\p\d\o\p\8\s\x\2\j\h\g\5\c\g\o\f\9\n\g\g\m\g\3\w\7\3\j\0\3\f\v\5\v\2\m\9\2\t\7\c\k\1\k\d\1\w\i\x\b\1\y\h\e\8\w\4\4\d\p\g\m\7\k\0\z\d\3\e\q\r\0\y\f\b\x\3\r\e\r\0\c\2\m\g\e\8\k\w\2\7\l\l\k\1\l\1\f\8\s\e\w\l\r\p\c\4\j\e\0\0\3\o\7\w\q\k\x\r\2\y\7\v\i\l\o\7\9\h\x\7\9\n\v\w\s\q\8\u\6\0\2\0\u\f\o\n\t\3\h\v\5\c\a\i\e\m\n\m\0\j\6\b\f\q\o\4\e\v\e\y\c\z\6\e\5\u\6\d\0\0\h\x\i\0\u\m\b\j\z\e\n\r\a\h\w\b\k\2\h\v\x\i\0\f\1\n\k\y\w\j\g\c\k\d\q\7\1\2\3\d\s\c\8\0\v\i\y\l\1\e\p\8\m\g\e\e\t\3\b\0\l\x\v\w\4\8\w\y\d\s\z\8\6\m\n\j\j\5\q\p\q\5\a\m\q\w\h\1\m\e\j\a\u\t\p\0\9\c\b\b\z\g\c\y\x\a\a\m\b\l\i\h\7\o\o\g\q\y\m\4\z\n\p\6\i\4\l\r\2\z\i\h\b\w\5\h\b\q\z\5\g\g\x\j\j\v\g\y\q\g\h\t\i\q\6\p\f\p\7\o\4\l\q\r\a\m\7\i\c\x\9\a\4\7\i\y\j\5\9\e\s\4\q\u\r\a\6\5\e\a\9\3\a\n\b\h\w\n\p\h\7\1\3\z\i\4\4\u\f\x\9\v\d\x\w\z\l\k\2\q\b\x\s\q\9\s\a\b\g\i\4\e\1\3\5\m\9\b\u\2\i\l\6\a\m\k\j\w\r\i
\n\0\k\b\s\f\7\i\8\4\9\2\w\6\v\1\s\t\h\q\c\m\i\5\r\z\0\f\3\0\x\j\3\n\b\s\o\7\e\0\j\7\u\x\0\v\8\s\6\a\v\q\m\h\4\k\k\n\p\l\l\7\n\s\g\c\b\0\7\q\0\a\z\s\b\y\k\z\9\u\e\8\c\n\t\v\a\g\s\a\2\i\9\e\l\c\j\j\2\4\8\a\4\b\h\i\y\1\q\6\c\d\p\g\i\c\w\8\q\j\g\m\8\k\7\7\p\3\k\z\5\n\9\c\p\t\2\r\l\p\7\6\p\e\q\8\z\4\u\f\h\t\r\k\7\i\9\m\y\6\l\t\v\c\o\z\7\h\e\q\g\v\f\e\r\v\f\c\d\l\k\v\5\1\6\f\n\5\8\3\u\b\i\q\4\1\d\v\j\s\s\k\f\z\r\c\l\u\7\c\x\s\4\2\i\l\8\4\6\w\y\n\g\q\4\t\p\z\3\p\j\v\l\8\0\l\8\b\b\e\1\7\2\e\s\k\0\n\l\c\u\l\d\v\k\k\b\y\x\k\c\s\m\e\7\v\a\i\e\q\n\f\m\f\1\4\x\1\n\a\s\1\7\c\s\z\t\p\y\r\p\x\z\a\3\r\i\7\c\q\5\a\9\p\s\8\s\7\l\c\y\3\x\9\k\2\v\1\p\2\t\k\w\y\j\x\f\u\k\h\t\y\0\1\1\5\h\e\3\m\8\j\3\n\d\8\b\q\2\g\7\8\1\g\m\r\f\c\w\1\d\g\a\1\8\k\2\v\l\z\6\b\8\i\6\z\o\o\0\z\q\4\1\s\t\l\s\p\t\4\c\1\h\e\v\s\q\m\w\5\1\4\q\j\p\o\r\7\7\p\y\v\n\y\p\i\w\k\a\9\f\l\l\i\2\l\d\4\3\r\v\2\7\t\t\2\m\e\d\1\r\k\y\l\p\m\o\w\n\q\f\s\x\a\v\v\b\c\e\9\o\a\g\s\m\n\c\h\4\0\n\5\i\0\q\m\t\q\k\2\8\4\q\9\o\v\s\4\q\7\v\b\q\r\0\8\3\4\1\e\v\6\0\l\2\t\3\7\n\v\n\d\7\5\8\o\7\m\v\y\i\z\k\3\8\5\f\j\4\x\y\2\z\d\z\x\m\8\k\a\0\y\n\4\1\l\x\2\w\7\h\i\b\r\g\9\s\9\m\6\p\k\2\h\7\u\m\w\z\o\d\h\a\7\6\b\b\g\0\1\g\4\3\j\v\d\q\6\t\t\p\l\0\y\e\f\u\p\d\s\m\y\3\f\y\3\g\2\p\g\x\d\l\k\o\1\k\r\l\p\w\x\g\n\g\i\p\x\y\e\z\u\3\7\c\7\o\y\k\9\4\7\3\l\i\u\1\g\a\x\2\2\7\5\w\t\2\n\j\d\0\v\f\g\q\q\b\g\i\i\w\o\w\y\8\d\8\k\s\0\x\u\k\s\w\z\t\g\0\a\6\4\j\p\i\b\y\g\u\g\5\l\z\l\i\6\n\0\2\v\7\7\s\u\y\n\w\z\8\8\k\e\f\0\f\m\0\3\u\x\6\e\g\q\i\x\q\2\t\i\7\f\z\o\5\d\a\f\p\o\u\d\c\m\w\h\8\5\7\u\k\u\6\6\c\a\r\t\3\c\t\4\p\0\j\d\i\p\k\c\8\6\f\2\8\z\9\8\8\d\a\i\5\k\i\w\j\g\3\c\c\y\h\r\6\p\d\y\v\1\x\a\n\0\r\6\e\2\l\w\0\l\9\6\g\i\b\d\s\3\s\y\z\r\6\k\i\x\b\u\x\8\b\n\i\k\0\0\5\i\a\5\p\7\9\8\g\1\k\z\v\0\t\n\z\q\0\a\d\q\g\2\7\v\t\v\d\1\4\q\0\m\s\q\a\y\g\v\d\k\z\f\z\z\e\0\u\d\r\p\u\q\w\s\r\q\w\p\0\2\3\1\e\f\i\0\n\0\y\j\l\7\i\9\m\d\l\g\o\w\d\j\a\a\y\i\t\m\d\4\m\m\o\4\i\6\h\n\a\k\c\0\e\b\d\r\v\6\k\0\j\j\f\u\0\r\f\j\x\l\o\l\0\7\7\3\d\8\h\s\f\t\9\i\u\b\f\7\c\9\1\4\1\q\d\7\4\4\o\u\7\6\a\k\h\d\r\j\a\q\l\j\w\e\e\r\g\t\q\b\x\z\n\5\a\k\m\0\w\f\a\d\a\4\n\m\1\a\6\8\c\6\o\u\s\5\l\j\h\i\9\1\o\k\g\t\d\4\j\k\b\w\z\7\t\w\d\q\k\j\2\d\w\j\0\9\s\1\z\l\1\f\r\j\i\a\k\c\3\4\j\t\h\s\u\f\2\k\a\8\6\9\m\a\4\n\m\h\z\s\6\v\o\n\5\l\0\x\f\g\3\v\o\n\9\9\6\a\6\q\e\x\9\u\e\f\u\r\w\q\0\7\p\m\o\2\1\q\3\e\h\q\o\1\a\i\s\x\y\c\k\b\v\6\5\p\f\7\k\l\c\n\p\t\c\d\0\f\i\y\m\k\p\h\b\o\p\n\p\c\d\0\o\g\a\5\m\t\v\p\m\k\8\z\5\i\t\a\5\b\0\f\c\m\z\2\3\m\4\i\x\z\x\6\b\9\1\5\9\w\o\7\6\b\x\6\h\b\q\a\r\r\9\i\6\h\e\n\b\j\r\y\m\9\p\h\g\0\6\n\e\o\1\a\y\d\7\l\u\y\u\j\x\4\o\u\j\g\u\t\7\9\j\7\r\a\t\h\6\d\c\5\f\j\4\r\r\h\0\t\e\a\p\u\a\y\j\9\r\m\8\m\4\1\2\w\a\h\y\7\l\9\j\5\5\7\w\h\4\e\f\g\z\2\6\j\g\r\3\l\i\v\k\r\3\5\t\w\j\z\w\g\p\b\f\1\9\h\e\m\d\k\g\o\v\r\4\8\3\8\r\4\0\r\r\p\s\p\n\i\b\g\k\4\i\3\h\1\f\w\1\3\i\o\o\p\1\o\k\b\l\y\s\g\7\z\x\n\u\q\e\7\s\j\0\x\t\8\d\v\w\4\a\3\6\q\n\d\p\a\s\i\u\n\v\h\v\t\4\t\4\o\2\u\r\x\n\x\y\n\w\5\d\c\h\r\j\t\9\z\l\b\o\f\0\x\y\d\i\1\o\b\l\8\t\m\7\6\h\i\b\1\i\r\g\y\6\f\j\e\a\k\s\1\l\1\a\q\8\3\y\1\9\u\l\5\4\o\x\y\2\8\2\j\7\b\z\s\2\f\e\d\z\g\g\e\5\f\b\o\0\a\t\c\5\n\j\4\l\f\z\3\1\l\c\4\p\f\j\s\d\0\a\o\x\0\9\n\b\3\w\t\r\g\y\x\0\1\n\i\h\j\g\b\r\g\j\q\t\m\m\8\r\g\x\u\b\z\7\g\k\t\z\k\r\z\b\a\4\q\6\d\k\m\b\y\4\f\c\e\q\r\0\h\z\v\l\7\d\b\k\0\k\s\6\n\1\w\8\a\b\7\l\j\g\6\y\2\8\f\f\t\p\m\e\5\7\h\z\y\g\j\2\g\9\8\v\z\n\z\r\8\4\v\x\h\c\2\i\a\6\5\7\2\5\y\0\v\w\i\l\m\l\n\x\m\t\p\d\p\e\o\h\z\n\4\v\5\2\o\e\2\2\m\y\t\v\p\3\p\i\6\n\x\b\o\l\7\c\s\b\b\o\b\q\j\4\3\d\y\1\u\b\w\2\r\l\y\k\r\1\h\z\v\q\c\x\x\n\8\m\j\i\9\s\i\4\p\p\z\q\b\7\s\3\5\1\o\1\w\8\z\h\b\g\a\p\
k\6\c\8\6\l\z\k\w\j\9\g\3\w\e\w\z\i\o\m\d\6\2\e\u\t\s\5\0\0\8\8\t\d\o\n\l\y\k\y\w\p\t\3\9\m\y\n\z\2\t\2\q\3\a\k\v\1\7\s\9\v\2\q\i\u\7\h\s\b\6\q\3\4\j\l\f\a\b\1\2\1\r\4\w\t\s\p\l\m\c\e\e\b\d\7\f\b\r\5\7\6\8\d\0\2\m\l\1\p\h\d\n\n\9\c\5\8\a\h\l\b\e\k\e\w\p\5\8\q\5\c\8\3\7\f\3\u\b\4\v\y\3\q\w\x\j\m\2\z\w\2\4\l\u\g\w\g\n\6\o\6\h\h\0\k\d\b\i\g\c\9\o\p\1\f\x\q\5\3\m\b\e\x\r\x\d\o\0\t\d\n\v\f\d\e\p\x\2\j\w\j\v\2\l\7\8\s\7\j\w\e\j\j\3\t\q\s\o\b\1\w\5\8\b\i\b\d\g\e\9\9\2\m\h\k\e\v\7\1\r\4\e\l\k\0\r\6\y\8\o\m\l\2\v\o\q\k\t\t\z\p\z\h\5\7\u\q\h\j\e\j\b\a\9\q\m\d\e\q\5\g\g\b\g\z\0\2\w\7\m\9\5\3\t\c\m\i\r\1\5\g\8\z\2\d\3\a\t\h\r\a\f\2\d\x\u\1\m\f\c\t\1\9\t\5\b\o\e\b\u\i\2\c\5\x\w\a\o\0\w\y\x\6\z\k\z\t\n\g\l\y\4\b\e\k\x\d\e\l\t\v\b\d\v\a\b\4\v\8\w\z\d\q\p\p\z\b\j\p\p\p\v\v\u\k\v\j\z\d\v\m\u\8\2\q\b\6\q\6\x\x\5\r\w\8\o\i\2\j\e\p\d\y\v\c\z\y\6\m\p\u\3\x\e\u\0\5\1\a\3\f\w\r\w\w\w\f\9\p\m\9\7\v\v\4\j\7\7\7\a\f\u\4\2\y\g\9\3\a\t\v\s\u\w\h\x\f\n\1\1\b\g\m\e\x\s\a\z\2\f\4\o\1\2\e\g\d\9\c\i\9\1\d\x\k\x\5\k\s\w\1\1\f\2\n\2\s\d\c\u\1\b\i\i\6\t\a\f\f\m\o\t\6\n\a\m\0\t\n\7\y\3\q\v\j\h\3\9\v\6\i\o\r\7\g\g\z\m\e\e\e\v\o\o\2\u\2\z\u\a\1\v\h\z\0\u\m\x\l\o\9\v\b\l\a\j\d\g\x\c\7\s\a\l\z\6\d\p\f\v\q\v\5\c\x\v\p\f\h\s\f\8\z\u\v\o\r\h\x\m\l\r\d\p\q\5\t\5\0\t\i\8\1\g\p\3\3\i\w\5\p\d\e\y\b\w\z\4\p\z\g\x\3\c\4\m\x\y\2\l\c\y\t\o\b\1\w\7\u\v\f\5\6\j\v\p\6\6\a\s\l\a\f\4\y\o\q\u\1\l\f\k\y\9\c\1\c\z\d\l\6\6\2\t\7\m\y\h\e\d\w\x\1\b\r\7\n\r\1\r\0\c\w\1\x\d\n\s\5\k\x\c\q\a\o\p\w\a\9\b\3\j\s\e\0\4\7\p\b\2\7\p\i\a\d\c\s\c\t\c\a\7\f\d\a\u\m\3\e\w\o\q\s\2\q\g\g\p\z\c\9\i\b\v\d\z\v\9\v\m\d\j\7\5\e\4\b\w\5\z\p\9\e\4\v\b\5\0\j\x\6\3\5\h\3\f\1\b\w\u\4\g\2\p\7\4\z\3\q\m\5\m\p\5\1\p\c\z\c\r\w\k\b\r\i\k\q\z\a\j\o\i\q\p\v\l\u\v\s\t\q\5\m\t\4\4\r\b\l\1\5\g\7\d\z\k\v\p\n\e\f\o\t\m\9\r\8\2\v\z\o\e\l\f\e\f\f\5\w\n\d\w\d\b\e\h\a\r\g\3\l\c\y\r\3\z\3\b\t\n\o\1\q\o\3\j\y\y\j\9\3\3\j\k\l\o\r\j\1\w\q\e\i\s\m\c\u\o\q\e\v\5\z\y\t\0\p\s\o\4\4\0\c\1\6\n\w\l\h\g\z\u\f\a\f\w\v\u\6\z\g\c\w\e\l\i\e\z\j\0\f\h\w\y\p\t\c\e\i\m\n\2\z\m\b\t\u\h\i\1\m\d\k\2\a\l\t\f\t\e\1\h\o\h\a\3\v\z\n\9\k\s\x\s\i\b\i\n\e\6\d\d\3\6\q\f\z\n\b\6\p\j\4\3\1\3\t\6\e\f\l\7\o\n\x\p\w\v\d\x\y\k\5\8\o\3\b\j\5\n\f\z\1\y\m\u\e\b\h\8\5\f\a\e\r\p\t\9\r\0\e\b\z\b\k\g\8\y\4\q\8\f\o\v\m\m\t\t\v\0\b\8\5\h\8\y\8\c\3\v\s\z\r\c\i\4\y\l\l\2\x\f\b\p\e\p\o\y\n\7\9\u\k\3\i\d\a\0\j\2\i\a\i\s\q\b\h\4\l\7\5\e\i\s\d\h\v\g\a\0\l\f\u\s\w\r\o\t\x\3\k\q\a\o\n\r\b\9\1\7\s\0\8\b\c\p\o\i\u\6\k\3\z\2\g\v\q\t\a\y\s\f\h\2\i\f\j\t\b\f\d\n\1\u\c\z\9\5\l\h\o\q\t\1\3\d\2\r\y\h\h\p\m\5\o\7\z\w\q\6\s\9\j\g\3\x\k\3\6\z\9\s\e\b\s\4\u\8\f\a\q\9\6\o\8\8\8\p\u\d\s\9\0\j\k\e\6\u\5\j\t\h\i\k\9\b\x\5\b\q\o\v\g\k\q\c\5\b\x\8\n\x\z\i\i\r\k\7\t\h\9\b\z\d\v\i\v\6\6\q\7\8\t\x\4\l\7\u\s\4\b\f\i\5\9\j\b\r\d\w\h\8\3\2 ]] 00:09:28.542 00:09:28.542 real 0m4.288s 00:09:28.542 user 0m3.603s 00:09:28.542 sys 0m2.438s 00:09:28.542 14:13:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:28.542 14:13:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:09:28.542 14:13:55 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:09:28.542 14:13:55 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:09:28.542 14:13:55 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:09:28.542 14:13:55 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:09:28.542 14:13:55 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:09:28.542 14:13:55 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 
00:09:28.542 14:13:55 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:09:28.542 14:13:55 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:09:28.542 14:13:55 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:09:28.542 14:13:55 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:28.542 14:13:55 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:09:28.542 { 00:09:28.542 "subsystems": [ 00:09:28.542 { 00:09:28.542 "subsystem": "bdev", 00:09:28.542 "config": [ 00:09:28.542 { 00:09:28.542 "params": { 00:09:28.542 "trtype": "pcie", 00:09:28.542 "traddr": "0000:00:10.0", 00:09:28.542 "name": "Nvme0" 00:09:28.542 }, 00:09:28.542 "method": "bdev_nvme_attach_controller" 00:09:28.542 }, 00:09:28.542 { 00:09:28.542 "method": "bdev_wait_for_examine" 00:09:28.542 } 00:09:28.542 ] 00:09:28.542 } 00:09:28.542 ] 00:09:28.542 } 00:09:28.542 [2024-11-06 14:13:55.992656] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:09:28.542 [2024-11-06 14:13:55.992795] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61969 ] 00:09:28.801 [2024-11-06 14:13:56.182163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.801 [2024-11-06 14:13:56.316422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.059 [2024-11-06 14:13:56.554254] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:29.317  [2024-11-06T14:13:58.328Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:09:30.693 00:09:30.693 14:13:58 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:30.693 00:09:30.693 real 0m48.459s 00:09:30.693 user 0m39.901s 00:09:30.693 sys 0m25.538s 00:09:30.693 14:13:58 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:30.693 14:13:58 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:09:30.693 ************************************ 00:09:30.693 END TEST spdk_dd_basic_rw 00:09:30.693 ************************************ 00:09:30.693 14:13:58 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:09:30.693 14:13:58 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:30.693 14:13:58 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:30.693 14:13:58 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:30.693 ************************************ 00:09:30.693 START TEST spdk_dd_posix 00:09:30.693 ************************************ 00:09:30.693 14:13:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:09:30.693 * Looking for test storage... 
00:09:30.693 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:30.693 14:13:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:30.693 14:13:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1691 -- # lcov --version 00:09:30.693 14:13:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:30.693 14:13:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:30.693 14:13:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:30.693 14:13:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:30.693 14:13:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:30.693 14:13:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:09:30.693 14:13:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:09:30.693 14:13:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:09:30.693 14:13:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:09:30.693 14:13:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:09:30.694 14:13:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:09:30.694 14:13:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:09:30.694 14:13:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:30.694 14:13:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:09:30.694 14:13:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:09:30.694 14:13:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:30.694 14:13:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:30.694 14:13:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:09:30.694 14:13:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:09:30.694 14:13:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:30.694 14:13:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:09:30.694 14:13:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:09:30.694 14:13:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:09:30.694 14:13:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:09:30.694 14:13:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:30.694 14:13:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:09:30.953 14:13:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:09:30.953 14:13:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:30.954 14:13:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:30.954 14:13:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:09:30.954 14:13:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:30.954 14:13:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:30.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.954 --rc genhtml_branch_coverage=1 00:09:30.954 --rc genhtml_function_coverage=1 00:09:30.954 --rc genhtml_legend=1 00:09:30.954 --rc geninfo_all_blocks=1 00:09:30.954 --rc geninfo_unexecuted_blocks=1 00:09:30.954 00:09:30.954 ' 00:09:30.954 14:13:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:30.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.954 --rc genhtml_branch_coverage=1 00:09:30.954 --rc genhtml_function_coverage=1 00:09:30.954 --rc genhtml_legend=1 00:09:30.954 --rc geninfo_all_blocks=1 00:09:30.954 --rc geninfo_unexecuted_blocks=1 00:09:30.954 00:09:30.954 ' 00:09:30.954 14:13:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:30.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.954 --rc genhtml_branch_coverage=1 00:09:30.954 --rc genhtml_function_coverage=1 00:09:30.954 --rc genhtml_legend=1 00:09:30.954 --rc geninfo_all_blocks=1 00:09:30.954 --rc geninfo_unexecuted_blocks=1 00:09:30.954 00:09:30.954 ' 00:09:30.954 14:13:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:30.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.954 --rc genhtml_branch_coverage=1 00:09:30.954 --rc genhtml_function_coverage=1 00:09:30.954 --rc genhtml_legend=1 00:09:30.954 --rc geninfo_all_blocks=1 00:09:30.954 --rc geninfo_unexecuted_blocks=1 00:09:30.954 00:09:30.954 ' 00:09:30.954 14:13:58 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:30.954 14:13:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:09:30.954 14:13:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:30.954 14:13:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:30.954 14:13:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:30.954 14:13:58 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.954 14:13:58 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.954 14:13:58 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.954 14:13:58 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:09:30.954 14:13:58 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.954 14:13:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:09:30.954 14:13:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:09:30.954 14:13:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:09:30.954 14:13:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:09:30.954 14:13:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:30.954 14:13:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:30.954 14:13:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:09:30.954 14:13:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:09:30.954 * First test run, liburing in use 00:09:30.954 14:13:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:09:30.954 14:13:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:30.954 14:13:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # 
xtrace_disable 00:09:30.954 14:13:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:30.954 ************************************ 00:09:30.954 START TEST dd_flag_append 00:09:30.954 ************************************ 00:09:30.954 14:13:58 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1127 -- # append 00:09:30.954 14:13:58 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:09:30.954 14:13:58 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:09:30.954 14:13:58 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:09:30.954 14:13:58 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:09:30.954 14:13:58 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:09:30.954 14:13:58 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=jko8nebh0hrjo1r1nbvs537csyhzkyrz 00:09:30.954 14:13:58 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:09:30.954 14:13:58 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:09:30.954 14:13:58 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:09:30.954 14:13:58 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=7es3j7tz0vt2zg4sgabni2ekfwzi6y5r 00:09:30.954 14:13:58 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s jko8nebh0hrjo1r1nbvs537csyhzkyrz 00:09:30.954 14:13:58 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s 7es3j7tz0vt2zg4sgabni2ekfwzi6y5r 00:09:30.954 14:13:58 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:09:30.954 [2024-11-06 14:13:58.486887] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
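dd_flag_append seeds each dump file with its own 32-byte random string and then copies dd.dump0 onto dd.dump1 with --oflag=append; the check on the following run expects dd.dump1 to end up as its original 32 bytes immediately followed by dd.dump0's. A sketch with the two strings taken verbatim from the log (paths shortened; the exact readback used by the test is not shown here, so the $(< ...) expansion is an assumption):

  dump0=jko8nebh0hrjo1r1nbvs537csyhzkyrz     # contents written to dd.dump0
  dump1=7es3j7tz0vt2zg4sgabni2ekfwzi6y5r     # original contents of dd.dump1
  # append dd.dump0's bytes onto the end of dd.dump1
  spdk_dd --if=test/dd/dd.dump0 --of=test/dd/dd.dump1 --oflag=append
  # dd.dump1 must now hold its own bytes followed by dump0's
  [[ $(< test/dd/dd.dump1) == "${dump1}${dump0}" ]]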
00:09:30.954 [2024-11-06 14:13:58.487047] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62053 ] 00:09:31.213 [2024-11-06 14:13:58.677146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.213 [2024-11-06 14:13:58.812717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.472 [2024-11-06 14:13:59.041193] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:31.730  [2024-11-06T14:14:00.738Z] Copying: 32/32 [B] (average 31 kBps) 00:09:33.103 00:09:33.103 14:14:00 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ 7es3j7tz0vt2zg4sgabni2ekfwzi6y5rjko8nebh0hrjo1r1nbvs537csyhzkyrz == \7\e\s\3\j\7\t\z\0\v\t\2\z\g\4\s\g\a\b\n\i\2\e\k\f\w\z\i\6\y\5\r\j\k\o\8\n\e\b\h\0\h\r\j\o\1\r\1\n\b\v\s\5\3\7\c\s\y\h\z\k\y\r\z ]] 00:09:33.103 00:09:33.103 real 0m2.031s 00:09:33.103 user 0m1.624s 00:09:33.103 sys 0m1.211s 00:09:33.103 14:14:00 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:33.103 ************************************ 00:09:33.103 END TEST dd_flag_append 00:09:33.103 ************************************ 00:09:33.103 14:14:00 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:09:33.103 14:14:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:09:33.103 14:14:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:33.103 14:14:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:33.103 14:14:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:33.103 ************************************ 00:09:33.103 START TEST dd_flag_directory 00:09:33.103 ************************************ 00:09:33.103 14:14:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1127 -- # directory 00:09:33.103 14:14:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:33.103 14:14:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:09:33.103 14:14:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:33.104 14:14:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:33.104 14:14:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:33.104 14:14:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:33.104 14:14:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:33.104 14:14:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:33.104 14:14:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:33.104 14:14:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:33.104 14:14:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:33.104 14:14:00 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:33.104 [2024-11-06 14:14:00.575919] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:09:33.104 [2024-11-06 14:14:00.576090] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62099 ] 00:09:33.361 [2024-11-06 14:14:00.766167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.362 [2024-11-06 14:14:00.900664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.619 [2024-11-06 14:14:01.134405] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:33.877 [2024-11-06 14:14:01.266399] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:09:33.877 [2024-11-06 14:14:01.266478] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:09:33.877 [2024-11-06 14:14:01.266506] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:34.812 [2024-11-06 14:14:02.188347] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:35.070 14:14:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:09:35.070 14:14:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:35.070 14:14:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:09:35.070 14:14:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:09:35.070 14:14:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:09:35.070 14:14:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:35.070 14:14:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:09:35.070 14:14:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:09:35.070 14:14:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:09:35.070 14:14:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.070 14:14:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:35.070 14:14:02 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.070 14:14:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:35.070 14:14:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.070 14:14:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:35.070 14:14:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.070 14:14:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:35.070 14:14:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:09:35.070 [2024-11-06 14:14:02.593470] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:09:35.070 [2024-11-06 14:14:02.593647] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62126 ] 00:09:35.329 [2024-11-06 14:14:02.776984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.329 [2024-11-06 14:14:02.906755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.587 [2024-11-06 14:14:03.136472] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:35.846 [2024-11-06 14:14:03.268942] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:09:35.846 [2024-11-06 14:14:03.269026] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:09:35.846 [2024-11-06 14:14:03.269053] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:36.782 [2024-11-06 14:14:04.197605] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:37.041 14:14:04 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:09:37.041 14:14:04 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:37.041 14:14:04 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:09:37.041 14:14:04 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:09:37.041 14:14:04 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:09:37.041 14:14:04 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:37.041 00:09:37.041 real 0m4.058s 00:09:37.041 user 0m3.285s 00:09:37.041 sys 0m0.543s 00:09:37.041 14:14:04 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:37.041 ************************************ 00:09:37.041 END TEST dd_flag_directory 00:09:37.041 ************************************ 00:09:37.041 14:14:04 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:09:37.041 14:14:04 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:09:37.041 14:14:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:37.041 14:14:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:37.041 14:14:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:37.041 ************************************ 00:09:37.041 START TEST dd_flag_nofollow 00:09:37.041 ************************************ 00:09:37.041 14:14:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1127 -- # nofollow 00:09:37.041 14:14:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:09:37.041 14:14:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:09:37.041 14:14:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:09:37.041 14:14:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:09:37.041 14:14:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:37.041 14:14:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:09:37.041 14:14:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:37.041 14:14:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:37.041 14:14:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:37.041 14:14:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:37.041 14:14:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:37.041 14:14:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:37.041 14:14:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:37.041 14:14:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:37.041 14:14:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:37.041 14:14:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:37.300 [2024-11-06 14:14:04.726334] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
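dd_flag_nofollow points symlinks at the two dump files and expects spdk_dd to refuse them whenever nofollow is requested: the --iflag=nofollow read through dd.dump0.link and the --oflag=nofollow write through dd.dump1.link must both fail with "Too many levels of symbolic links" (the NOT wrapper asserts a non-zero exit), while the plain copy through the link at the end of the test is expected to succeed. Sketched with the flags from the log (paths shortened; illustrative only):

  ln -fs test/dd/dd.dump0 test/dd/dd.dump0.link
  ln -fs test/dd/dd.dump1 test/dd/dd.dump1.link
  # both of these must fail: nofollow forbids opening through a symlink
  ! spdk_dd --if=test/dd/dd.dump0.link --iflag=nofollow --of=test/dd/dd.dump1
  ! spdk_dd --if=test/dd/dd.dump0 --of=test/dd/dd.dump1.link --oflag=nofollow
  # without nofollow the link is followed and the 512 generated bytes copy through cleanly
  spdk_dd --if=test/dd/dd.dump0.link --of=test/dd/dd.dump1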
00:09:37.300 [2024-11-06 14:14:04.726494] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62172 ] 00:09:37.300 [2024-11-06 14:14:04.910819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.558 [2024-11-06 14:14:05.090283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.816 [2024-11-06 14:14:05.312859] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:37.816 [2024-11-06 14:14:05.441701] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:09:37.817 [2024-11-06 14:14:05.441784] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:09:37.817 [2024-11-06 14:14:05.441814] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:38.750 [2024-11-06 14:14:06.374025] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:39.317 14:14:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:09:39.317 14:14:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:39.317 14:14:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:09:39.317 14:14:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:09:39.317 14:14:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:09:39.317 14:14:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:39.317 14:14:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:09:39.317 14:14:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:09:39.317 14:14:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:09:39.317 14:14:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:39.317 14:14:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:39.317 14:14:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:39.317 14:14:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:39.317 14:14:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:39.317 14:14:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:39.317 14:14:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:39.317 14:14:06 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:39.317 14:14:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:09:39.317 [2024-11-06 14:14:06.793843] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:09:39.317 [2024-11-06 14:14:06.794285] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62199 ] 00:09:39.576 [2024-11-06 14:14:06.985020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.576 [2024-11-06 14:14:07.164811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.836 [2024-11-06 14:14:07.388218] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:40.095 [2024-11-06 14:14:07.515118] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:09:40.095 [2024-11-06 14:14:07.515197] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:09:40.095 [2024-11-06 14:14:07.515227] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:41.034 [2024-11-06 14:14:08.440126] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:41.292 14:14:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:09:41.293 14:14:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:41.293 14:14:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:09:41.293 14:14:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:09:41.293 14:14:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:09:41.293 14:14:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:41.293 14:14:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:09:41.293 14:14:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:09:41.293 14:14:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:09:41.293 14:14:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:41.293 [2024-11-06 14:14:08.861509] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
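What the nofollow case above boils down to, as a minimal standalone sketch (the spdk_dd path is taken from the trace, dump-file names are shortened to their basenames, and the echo messages are illustrative): the input, and in the second pass the output, is routed through a symlink while the matching nofollow flag is set, and the copy must fail with "Too many levels of symbolic links" (ELOOP).

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    ln -fs dd.dump0 dd.dump0.link
    ln -fs dd.dump1 dd.dump1.link
    # Symlinked input with --iflag=nofollow: the open must be refused.
    if ! "$DD" --if=dd.dump0.link --iflag=nofollow --of=dd.dump1; then
        echo "input symlink rejected, as expected"
    fi
    # Symlinked output with --oflag=nofollow: same expectation on the write side.
    if ! "$DD" --if=dd.dump0 --of=dd.dump1.link --oflag=nofollow; then
        echo "output symlink rejected, as expected"
    fi
    # Without the flag the same link is followed and the 512 bytes copy normally.
    "$DD" --if=dd.dump0.link --of=dd.dump1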
00:09:41.293 [2024-11-06 14:14:08.861667] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62223 ] 00:09:41.551 [2024-11-06 14:14:09.049545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.551 [2024-11-06 14:14:09.179496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.811 [2024-11-06 14:14:09.399284] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:42.070  [2024-11-06T14:14:11.081Z] Copying: 512/512 [B] (average 500 kBps) 00:09:43.446 00:09:43.446 14:14:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ ijvmbmjj66zkspybbuie1538cijpiehvsdho21014ax2jbgf240i7gy1q1591lo9pgx7v0k8a4cwpgj00rf386vhgw4d4ytcvc5afupc7h5ar4xrwzfg6tdx40ul03u8t0lyy4alev9c3ydj23npzfufq4yfvtqwy61zwl4cs2qyqdkpwji46zca6xmieb7et2ct9swabv98u8z2kswtumsf1adlmgkykxjs35j48wimeozjp27igm8vi5c8v8n48z5fseiwwp0h80lmalk6glxj35on887a5ncw9jw4ecrdtoqqqrhxorvn80essn3vj0j7t5zswwwgg8zyh2kvjyl2z2l026orogs2ibrdpni6j4hn2foug2ga6h9mx0vu4zd1plc54bgnwjqw16vrrnhcc8mrezsqbcvd5g9rro2xge8isfa0k3jt6oeeydfmd7evprvbv4f7kkj7b8re7neplnoatffjwh3ninipd4vfo2m73mi0e1c9hkbm79th == \i\j\v\m\b\m\j\j\6\6\z\k\s\p\y\b\b\u\i\e\1\5\3\8\c\i\j\p\i\e\h\v\s\d\h\o\2\1\0\1\4\a\x\2\j\b\g\f\2\4\0\i\7\g\y\1\q\1\5\9\1\l\o\9\p\g\x\7\v\0\k\8\a\4\c\w\p\g\j\0\0\r\f\3\8\6\v\h\g\w\4\d\4\y\t\c\v\c\5\a\f\u\p\c\7\h\5\a\r\4\x\r\w\z\f\g\6\t\d\x\4\0\u\l\0\3\u\8\t\0\l\y\y\4\a\l\e\v\9\c\3\y\d\j\2\3\n\p\z\f\u\f\q\4\y\f\v\t\q\w\y\6\1\z\w\l\4\c\s\2\q\y\q\d\k\p\w\j\i\4\6\z\c\a\6\x\m\i\e\b\7\e\t\2\c\t\9\s\w\a\b\v\9\8\u\8\z\2\k\s\w\t\u\m\s\f\1\a\d\l\m\g\k\y\k\x\j\s\3\5\j\4\8\w\i\m\e\o\z\j\p\2\7\i\g\m\8\v\i\5\c\8\v\8\n\4\8\z\5\f\s\e\i\w\w\p\0\h\8\0\l\m\a\l\k\6\g\l\x\j\3\5\o\n\8\8\7\a\5\n\c\w\9\j\w\4\e\c\r\d\t\o\q\q\q\r\h\x\o\r\v\n\8\0\e\s\s\n\3\v\j\0\j\7\t\5\z\s\w\w\w\g\g\8\z\y\h\2\k\v\j\y\l\2\z\2\l\0\2\6\o\r\o\g\s\2\i\b\r\d\p\n\i\6\j\4\h\n\2\f\o\u\g\2\g\a\6\h\9\m\x\0\v\u\4\z\d\1\p\l\c\5\4\b\g\n\w\j\q\w\1\6\v\r\r\n\h\c\c\8\m\r\e\z\s\q\b\c\v\d\5\g\9\r\r\o\2\x\g\e\8\i\s\f\a\0\k\3\j\t\6\o\e\e\y\d\f\m\d\7\e\v\p\r\v\b\v\4\f\7\k\k\j\7\b\8\r\e\7\n\e\p\l\n\o\a\t\f\f\j\w\h\3\n\i\n\i\p\d\4\v\f\o\2\m\7\3\m\i\0\e\1\c\9\h\k\b\m\7\9\t\h ]] 00:09:43.446 00:09:43.446 real 0m6.105s 00:09:43.446 user 0m4.921s 00:09:43.446 sys 0m1.727s 00:09:43.446 14:14:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:43.446 ************************************ 00:09:43.446 END TEST dd_flag_nofollow 00:09:43.446 ************************************ 00:09:43.446 14:14:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:09:43.446 14:14:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:09:43.446 14:14:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:43.446 14:14:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:43.446 14:14:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:43.446 ************************************ 00:09:43.446 START TEST dd_flag_noatime 00:09:43.446 ************************************ 00:09:43.446 14:14:10 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1127 -- # noatime 00:09:43.446 14:14:10 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:09:43.446 14:14:10 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:09:43.446 14:14:10 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:09:43.446 14:14:10 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:09:43.446 14:14:10 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:09:43.446 14:14:10 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:43.446 14:14:10 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1730902449 00:09:43.446 14:14:10 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:43.446 14:14:10 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1730902450 00:09:43.446 14:14:10 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:09:44.410 14:14:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:44.410 [2024-11-06 14:14:11.913948] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:09:44.410 [2024-11-06 14:14:11.914141] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62279 ] 00:09:44.668 [2024-11-06 14:14:12.104630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.668 [2024-11-06 14:14:12.233226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.926 [2024-11-06 14:14:12.456891] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:45.184  [2024-11-06T14:14:13.754Z] Copying: 512/512 [B] (average 500 kBps) 00:09:46.119 00:09:46.378 14:14:13 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:46.378 14:14:13 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1730902449 )) 00:09:46.378 14:14:13 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:46.378 14:14:13 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1730902450 )) 00:09:46.378 14:14:13 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:46.378 [2024-11-06 14:14:13.901847] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
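A condensed sketch of the noatime assertion running here, assuming GNU stat as used in the trace (--printf=%X prints the access time in epoch seconds) and shortened file names: a read performed with --iflag=noatime must leave the source file's atime unchanged, while a later plain read is allowed to advance it.

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    atime_before=$(stat --printf=%X dd.dump0)
    sleep 1                                  # so an updated atime would be visible
    "$DD" --if=dd.dump0 --iflag=noatime --of=dd.dump1
    atime_after=$(stat --printf=%X dd.dump0)
    (( atime_before == atime_after )) && echo "atime preserved by the noatime read"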
00:09:46.378 [2024-11-06 14:14:13.902073] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62310 ] 00:09:46.637 [2024-11-06 14:14:14.091133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.637 [2024-11-06 14:14:14.221568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.896 [2024-11-06 14:14:14.439004] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:47.154  [2024-11-06T14:14:15.744Z] Copying: 512/512 [B] (average 500 kBps) 00:09:48.109 00:09:48.368 14:14:15 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:48.368 14:14:15 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1730902454 )) 00:09:48.368 00:09:48.368 real 0m4.997s 00:09:48.368 user 0m3.206s 00:09:48.368 sys 0m2.374s 00:09:48.368 14:14:15 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:48.368 14:14:15 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:09:48.368 ************************************ 00:09:48.368 END TEST dd_flag_noatime 00:09:48.368 ************************************ 00:09:48.368 14:14:15 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:09:48.368 14:14:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:48.368 14:14:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:48.368 14:14:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:48.368 ************************************ 00:09:48.368 START TEST dd_flags_misc 00:09:48.368 ************************************ 00:09:48.368 14:14:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1127 -- # io 00:09:48.368 14:14:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:09:48.368 14:14:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:09:48.368 14:14:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:09:48.369 14:14:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:09:48.369 14:14:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:09:48.369 14:14:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:09:48.369 14:14:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:09:48.369 14:14:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:48.369 14:14:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:09:48.369 [2024-11-06 14:14:15.963838] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:09:48.369 [2024-11-06 14:14:15.963997] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62356 ] 00:09:48.627 [2024-11-06 14:14:16.153943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.886 [2024-11-06 14:14:16.287487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.886 [2024-11-06 14:14:16.518208] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:49.144  [2024-11-06T14:14:18.158Z] Copying: 512/512 [B] (average 500 kBps) 00:09:50.523 00:09:50.523 14:14:17 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 0tad97wt90xknyrdjrnax09bwkpw45pjk4wz3iu47h0i3v104dgy2bfjz5drrndpmk77trhddwbatghwccm3blvkttdkwzksbzpt7l9kh14cjwla5rapukk9wnksw9bq2ryrsnjfri2d299ydi2p9k4q8bzaomk1pyagcly4si8o3vyppkm1h0tsnv9yp11hdeebqytqe5txbc5y4xje1nusxwxo6eheh6u1r6vreyg57x8jhqycklecsiw43fe1o8cav1ytdg3q9yqiznu19qtx3uetixzfmd44vg6mzkreloe8u2ylyime92n3dukz36xyxgeplwimxdziyxtqrkxq5nt5pkoiory80nmkfhukef905od0zuj4ib7gqxlko2v9sjozdue8d005r8sbjjuy16lujv2y71vk0wnsnkpfq5mof7ycjxhrj59foswbtc1uiui1tp3bfyn8na1wpd6fsrjs12fwtcfxwwjjfpslplptkvefmyl9e2ey1qvg == \0\t\a\d\9\7\w\t\9\0\x\k\n\y\r\d\j\r\n\a\x\0\9\b\w\k\p\w\4\5\p\j\k\4\w\z\3\i\u\4\7\h\0\i\3\v\1\0\4\d\g\y\2\b\f\j\z\5\d\r\r\n\d\p\m\k\7\7\t\r\h\d\d\w\b\a\t\g\h\w\c\c\m\3\b\l\v\k\t\t\d\k\w\z\k\s\b\z\p\t\7\l\9\k\h\1\4\c\j\w\l\a\5\r\a\p\u\k\k\9\w\n\k\s\w\9\b\q\2\r\y\r\s\n\j\f\r\i\2\d\2\9\9\y\d\i\2\p\9\k\4\q\8\b\z\a\o\m\k\1\p\y\a\g\c\l\y\4\s\i\8\o\3\v\y\p\p\k\m\1\h\0\t\s\n\v\9\y\p\1\1\h\d\e\e\b\q\y\t\q\e\5\t\x\b\c\5\y\4\x\j\e\1\n\u\s\x\w\x\o\6\e\h\e\h\6\u\1\r\6\v\r\e\y\g\5\7\x\8\j\h\q\y\c\k\l\e\c\s\i\w\4\3\f\e\1\o\8\c\a\v\1\y\t\d\g\3\q\9\y\q\i\z\n\u\1\9\q\t\x\3\u\e\t\i\x\z\f\m\d\4\4\v\g\6\m\z\k\r\e\l\o\e\8\u\2\y\l\y\i\m\e\9\2\n\3\d\u\k\z\3\6\x\y\x\g\e\p\l\w\i\m\x\d\z\i\y\x\t\q\r\k\x\q\5\n\t\5\p\k\o\i\o\r\y\8\0\n\m\k\f\h\u\k\e\f\9\0\5\o\d\0\z\u\j\4\i\b\7\g\q\x\l\k\o\2\v\9\s\j\o\z\d\u\e\8\d\0\0\5\r\8\s\b\j\j\u\y\1\6\l\u\j\v\2\y\7\1\v\k\0\w\n\s\n\k\p\f\q\5\m\o\f\7\y\c\j\x\h\r\j\5\9\f\o\s\w\b\t\c\1\u\i\u\i\1\t\p\3\b\f\y\n\8\n\a\1\w\p\d\6\f\s\r\j\s\1\2\f\w\t\c\f\x\w\w\j\j\f\p\s\l\p\l\p\t\k\v\e\f\m\y\l\9\e\2\e\y\1\q\v\g ]] 00:09:50.523 14:14:17 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:50.523 14:14:17 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:09:50.523 [2024-11-06 14:14:17.941779] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
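The dd_flags_misc phase that has just started is a small matrix: each read-side flag from (direct nonblock) is paired with each write-side flag from (direct nonblock sync dsync), and every copy is checked by comparing the 512 output bytes against the source, which is what the long [[ ... == ... ]] expressions in this log record. A rough reconstruction of that loop, with the content check reduced to cmp for brevity:

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    flags_ro=(direct nonblock)
    flags_rw=("${flags_ro[@]}" sync dsync)
    for flag_ro in "${flags_ro[@]}"; do
        for flag_rw in "${flags_rw[@]}"; do
            "$DD" --if=dd.dump0 --iflag="$flag_ro" --of=dd.dump1 --oflag="$flag_rw"
            cmp dd.dump0 dd.dump1        # the data must survive every flag combination
        done
    done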
00:09:50.523 [2024-11-06 14:14:17.941938] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62383 ] 00:09:50.523 [2024-11-06 14:14:18.128456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.781 [2024-11-06 14:14:18.257998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.040 [2024-11-06 14:14:18.485337] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:51.040  [2024-11-06T14:14:20.053Z] Copying: 512/512 [B] (average 500 kBps) 00:09:52.418 00:09:52.418 14:14:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 0tad97wt90xknyrdjrnax09bwkpw45pjk4wz3iu47h0i3v104dgy2bfjz5drrndpmk77trhddwbatghwccm3blvkttdkwzksbzpt7l9kh14cjwla5rapukk9wnksw9bq2ryrsnjfri2d299ydi2p9k4q8bzaomk1pyagcly4si8o3vyppkm1h0tsnv9yp11hdeebqytqe5txbc5y4xje1nusxwxo6eheh6u1r6vreyg57x8jhqycklecsiw43fe1o8cav1ytdg3q9yqiznu19qtx3uetixzfmd44vg6mzkreloe8u2ylyime92n3dukz36xyxgeplwimxdziyxtqrkxq5nt5pkoiory80nmkfhukef905od0zuj4ib7gqxlko2v9sjozdue8d005r8sbjjuy16lujv2y71vk0wnsnkpfq5mof7ycjxhrj59foswbtc1uiui1tp3bfyn8na1wpd6fsrjs12fwtcfxwwjjfpslplptkvefmyl9e2ey1qvg == \0\t\a\d\9\7\w\t\9\0\x\k\n\y\r\d\j\r\n\a\x\0\9\b\w\k\p\w\4\5\p\j\k\4\w\z\3\i\u\4\7\h\0\i\3\v\1\0\4\d\g\y\2\b\f\j\z\5\d\r\r\n\d\p\m\k\7\7\t\r\h\d\d\w\b\a\t\g\h\w\c\c\m\3\b\l\v\k\t\t\d\k\w\z\k\s\b\z\p\t\7\l\9\k\h\1\4\c\j\w\l\a\5\r\a\p\u\k\k\9\w\n\k\s\w\9\b\q\2\r\y\r\s\n\j\f\r\i\2\d\2\9\9\y\d\i\2\p\9\k\4\q\8\b\z\a\o\m\k\1\p\y\a\g\c\l\y\4\s\i\8\o\3\v\y\p\p\k\m\1\h\0\t\s\n\v\9\y\p\1\1\h\d\e\e\b\q\y\t\q\e\5\t\x\b\c\5\y\4\x\j\e\1\n\u\s\x\w\x\o\6\e\h\e\h\6\u\1\r\6\v\r\e\y\g\5\7\x\8\j\h\q\y\c\k\l\e\c\s\i\w\4\3\f\e\1\o\8\c\a\v\1\y\t\d\g\3\q\9\y\q\i\z\n\u\1\9\q\t\x\3\u\e\t\i\x\z\f\m\d\4\4\v\g\6\m\z\k\r\e\l\o\e\8\u\2\y\l\y\i\m\e\9\2\n\3\d\u\k\z\3\6\x\y\x\g\e\p\l\w\i\m\x\d\z\i\y\x\t\q\r\k\x\q\5\n\t\5\p\k\o\i\o\r\y\8\0\n\m\k\f\h\u\k\e\f\9\0\5\o\d\0\z\u\j\4\i\b\7\g\q\x\l\k\o\2\v\9\s\j\o\z\d\u\e\8\d\0\0\5\r\8\s\b\j\j\u\y\1\6\l\u\j\v\2\y\7\1\v\k\0\w\n\s\n\k\p\f\q\5\m\o\f\7\y\c\j\x\h\r\j\5\9\f\o\s\w\b\t\c\1\u\i\u\i\1\t\p\3\b\f\y\n\8\n\a\1\w\p\d\6\f\s\r\j\s\1\2\f\w\t\c\f\x\w\w\j\j\f\p\s\l\p\l\p\t\k\v\e\f\m\y\l\9\e\2\e\y\1\q\v\g ]] 00:09:52.418 14:14:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:52.418 14:14:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:09:52.418 [2024-11-06 14:14:19.857052] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:09:52.418 [2024-11-06 14:14:19.857229] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62410 ] 00:09:52.418 [2024-11-06 14:14:20.045544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.677 [2024-11-06 14:14:20.168362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.936 [2024-11-06 14:14:20.386049] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:52.936  [2024-11-06T14:14:21.949Z] Copying: 512/512 [B] (average 125 kBps) 00:09:54.314 00:09:54.314 14:14:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 0tad97wt90xknyrdjrnax09bwkpw45pjk4wz3iu47h0i3v104dgy2bfjz5drrndpmk77trhddwbatghwccm3blvkttdkwzksbzpt7l9kh14cjwla5rapukk9wnksw9bq2ryrsnjfri2d299ydi2p9k4q8bzaomk1pyagcly4si8o3vyppkm1h0tsnv9yp11hdeebqytqe5txbc5y4xje1nusxwxo6eheh6u1r6vreyg57x8jhqycklecsiw43fe1o8cav1ytdg3q9yqiznu19qtx3uetixzfmd44vg6mzkreloe8u2ylyime92n3dukz36xyxgeplwimxdziyxtqrkxq5nt5pkoiory80nmkfhukef905od0zuj4ib7gqxlko2v9sjozdue8d005r8sbjjuy16lujv2y71vk0wnsnkpfq5mof7ycjxhrj59foswbtc1uiui1tp3bfyn8na1wpd6fsrjs12fwtcfxwwjjfpslplptkvefmyl9e2ey1qvg == \0\t\a\d\9\7\w\t\9\0\x\k\n\y\r\d\j\r\n\a\x\0\9\b\w\k\p\w\4\5\p\j\k\4\w\z\3\i\u\4\7\h\0\i\3\v\1\0\4\d\g\y\2\b\f\j\z\5\d\r\r\n\d\p\m\k\7\7\t\r\h\d\d\w\b\a\t\g\h\w\c\c\m\3\b\l\v\k\t\t\d\k\w\z\k\s\b\z\p\t\7\l\9\k\h\1\4\c\j\w\l\a\5\r\a\p\u\k\k\9\w\n\k\s\w\9\b\q\2\r\y\r\s\n\j\f\r\i\2\d\2\9\9\y\d\i\2\p\9\k\4\q\8\b\z\a\o\m\k\1\p\y\a\g\c\l\y\4\s\i\8\o\3\v\y\p\p\k\m\1\h\0\t\s\n\v\9\y\p\1\1\h\d\e\e\b\q\y\t\q\e\5\t\x\b\c\5\y\4\x\j\e\1\n\u\s\x\w\x\o\6\e\h\e\h\6\u\1\r\6\v\r\e\y\g\5\7\x\8\j\h\q\y\c\k\l\e\c\s\i\w\4\3\f\e\1\o\8\c\a\v\1\y\t\d\g\3\q\9\y\q\i\z\n\u\1\9\q\t\x\3\u\e\t\i\x\z\f\m\d\4\4\v\g\6\m\z\k\r\e\l\o\e\8\u\2\y\l\y\i\m\e\9\2\n\3\d\u\k\z\3\6\x\y\x\g\e\p\l\w\i\m\x\d\z\i\y\x\t\q\r\k\x\q\5\n\t\5\p\k\o\i\o\r\y\8\0\n\m\k\f\h\u\k\e\f\9\0\5\o\d\0\z\u\j\4\i\b\7\g\q\x\l\k\o\2\v\9\s\j\o\z\d\u\e\8\d\0\0\5\r\8\s\b\j\j\u\y\1\6\l\u\j\v\2\y\7\1\v\k\0\w\n\s\n\k\p\f\q\5\m\o\f\7\y\c\j\x\h\r\j\5\9\f\o\s\w\b\t\c\1\u\i\u\i\1\t\p\3\b\f\y\n\8\n\a\1\w\p\d\6\f\s\r\j\s\1\2\f\w\t\c\f\x\w\w\j\j\f\p\s\l\p\l\p\t\k\v\e\f\m\y\l\9\e\2\e\y\1\q\v\g ]] 00:09:54.314 14:14:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:54.314 14:14:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:09:54.314 [2024-11-06 14:14:21.768540] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:09:54.314 [2024-11-06 14:14:21.768725] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62432 ] 00:09:54.573 [2024-11-06 14:14:21.956111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.573 [2024-11-06 14:14:22.074513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.832 [2024-11-06 14:14:22.302506] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:54.832  [2024-11-06T14:14:23.845Z] Copying: 512/512 [B] (average 250 kBps) 00:09:56.210 00:09:56.210 14:14:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 0tad97wt90xknyrdjrnax09bwkpw45pjk4wz3iu47h0i3v104dgy2bfjz5drrndpmk77trhddwbatghwccm3blvkttdkwzksbzpt7l9kh14cjwla5rapukk9wnksw9bq2ryrsnjfri2d299ydi2p9k4q8bzaomk1pyagcly4si8o3vyppkm1h0tsnv9yp11hdeebqytqe5txbc5y4xje1nusxwxo6eheh6u1r6vreyg57x8jhqycklecsiw43fe1o8cav1ytdg3q9yqiznu19qtx3uetixzfmd44vg6mzkreloe8u2ylyime92n3dukz36xyxgeplwimxdziyxtqrkxq5nt5pkoiory80nmkfhukef905od0zuj4ib7gqxlko2v9sjozdue8d005r8sbjjuy16lujv2y71vk0wnsnkpfq5mof7ycjxhrj59foswbtc1uiui1tp3bfyn8na1wpd6fsrjs12fwtcfxwwjjfpslplptkvefmyl9e2ey1qvg == \0\t\a\d\9\7\w\t\9\0\x\k\n\y\r\d\j\r\n\a\x\0\9\b\w\k\p\w\4\5\p\j\k\4\w\z\3\i\u\4\7\h\0\i\3\v\1\0\4\d\g\y\2\b\f\j\z\5\d\r\r\n\d\p\m\k\7\7\t\r\h\d\d\w\b\a\t\g\h\w\c\c\m\3\b\l\v\k\t\t\d\k\w\z\k\s\b\z\p\t\7\l\9\k\h\1\4\c\j\w\l\a\5\r\a\p\u\k\k\9\w\n\k\s\w\9\b\q\2\r\y\r\s\n\j\f\r\i\2\d\2\9\9\y\d\i\2\p\9\k\4\q\8\b\z\a\o\m\k\1\p\y\a\g\c\l\y\4\s\i\8\o\3\v\y\p\p\k\m\1\h\0\t\s\n\v\9\y\p\1\1\h\d\e\e\b\q\y\t\q\e\5\t\x\b\c\5\y\4\x\j\e\1\n\u\s\x\w\x\o\6\e\h\e\h\6\u\1\r\6\v\r\e\y\g\5\7\x\8\j\h\q\y\c\k\l\e\c\s\i\w\4\3\f\e\1\o\8\c\a\v\1\y\t\d\g\3\q\9\y\q\i\z\n\u\1\9\q\t\x\3\u\e\t\i\x\z\f\m\d\4\4\v\g\6\m\z\k\r\e\l\o\e\8\u\2\y\l\y\i\m\e\9\2\n\3\d\u\k\z\3\6\x\y\x\g\e\p\l\w\i\m\x\d\z\i\y\x\t\q\r\k\x\q\5\n\t\5\p\k\o\i\o\r\y\8\0\n\m\k\f\h\u\k\e\f\9\0\5\o\d\0\z\u\j\4\i\b\7\g\q\x\l\k\o\2\v\9\s\j\o\z\d\u\e\8\d\0\0\5\r\8\s\b\j\j\u\y\1\6\l\u\j\v\2\y\7\1\v\k\0\w\n\s\n\k\p\f\q\5\m\o\f\7\y\c\j\x\h\r\j\5\9\f\o\s\w\b\t\c\1\u\i\u\i\1\t\p\3\b\f\y\n\8\n\a\1\w\p\d\6\f\s\r\j\s\1\2\f\w\t\c\f\x\w\w\j\j\f\p\s\l\p\l\p\t\k\v\e\f\m\y\l\9\e\2\e\y\1\q\v\g ]] 00:09:56.210 14:14:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:09:56.210 14:14:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:09:56.210 14:14:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:09:56.210 14:14:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:09:56.210 14:14:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:56.210 14:14:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:09:56.210 [2024-11-06 14:14:23.734262] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:09:56.210 [2024-11-06 14:14:23.734414] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62459 ] 00:09:56.468 [2024-11-06 14:14:23.930066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.468 [2024-11-06 14:14:24.062058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.727 [2024-11-06 14:14:24.292792] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:56.986  [2024-11-06T14:14:25.997Z] Copying: 512/512 [B] (average 500 kBps) 00:09:58.362 00:09:58.362 14:14:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ xqhcawveph2o2rojlwjgrnej7cumhhauleif8n1hm7lac0sg186cm7hg3f5au2rdlj3ybwe0wmu970l2utwpe84dqgkp69lkiibhlxy8aj3u8azi10a3yhtd621g2p36yjfhuik3ghj8hzo696j1rrvojbcchz6evu0ejknq3t3lx5ues1ra7ejj3y0lephnvu3rrvb9lf9ew35padjkzr1qlz3ywo9ayd71eok8v81hi1idcs6aw4gygwgtwlz1w8lwi8sgdnoknvff5jtus8lizz1njebehi3z92b3or1z20nc79vswlyy4ef46yja6h8iua4zdaivtih1tm8nv1v3522851is30fxg5phjn1ffxhn5riinia6ipk346q85n3gfc5y4ctl35cj19irus9s5yvplndpcein67znurd612mx0q36t59pld7ifjzjjkhbcsfmifedq378r2wxwf4d1k68lsh4036dz47luyx11exxfb47up7lwa67kt6m == \x\q\h\c\a\w\v\e\p\h\2\o\2\r\o\j\l\w\j\g\r\n\e\j\7\c\u\m\h\h\a\u\l\e\i\f\8\n\1\h\m\7\l\a\c\0\s\g\1\8\6\c\m\7\h\g\3\f\5\a\u\2\r\d\l\j\3\y\b\w\e\0\w\m\u\9\7\0\l\2\u\t\w\p\e\8\4\d\q\g\k\p\6\9\l\k\i\i\b\h\l\x\y\8\a\j\3\u\8\a\z\i\1\0\a\3\y\h\t\d\6\2\1\g\2\p\3\6\y\j\f\h\u\i\k\3\g\h\j\8\h\z\o\6\9\6\j\1\r\r\v\o\j\b\c\c\h\z\6\e\v\u\0\e\j\k\n\q\3\t\3\l\x\5\u\e\s\1\r\a\7\e\j\j\3\y\0\l\e\p\h\n\v\u\3\r\r\v\b\9\l\f\9\e\w\3\5\p\a\d\j\k\z\r\1\q\l\z\3\y\w\o\9\a\y\d\7\1\e\o\k\8\v\8\1\h\i\1\i\d\c\s\6\a\w\4\g\y\g\w\g\t\w\l\z\1\w\8\l\w\i\8\s\g\d\n\o\k\n\v\f\f\5\j\t\u\s\8\l\i\z\z\1\n\j\e\b\e\h\i\3\z\9\2\b\3\o\r\1\z\2\0\n\c\7\9\v\s\w\l\y\y\4\e\f\4\6\y\j\a\6\h\8\i\u\a\4\z\d\a\i\v\t\i\h\1\t\m\8\n\v\1\v\3\5\2\2\8\5\1\i\s\3\0\f\x\g\5\p\h\j\n\1\f\f\x\h\n\5\r\i\i\n\i\a\6\i\p\k\3\4\6\q\8\5\n\3\g\f\c\5\y\4\c\t\l\3\5\c\j\1\9\i\r\u\s\9\s\5\y\v\p\l\n\d\p\c\e\i\n\6\7\z\n\u\r\d\6\1\2\m\x\0\q\3\6\t\5\9\p\l\d\7\i\f\j\z\j\j\k\h\b\c\s\f\m\i\f\e\d\q\3\7\8\r\2\w\x\w\f\4\d\1\k\6\8\l\s\h\4\0\3\6\d\z\4\7\l\u\y\x\1\1\e\x\x\f\b\4\7\u\p\7\l\w\a\6\7\k\t\6\m ]] 00:09:58.362 14:14:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:58.362 14:14:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:09:58.362 [2024-11-06 14:14:25.783012] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:09:58.362 [2024-11-06 14:14:25.783244] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62486 ] 00:09:58.362 [2024-11-06 14:14:25.983006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.621 [2024-11-06 14:14:26.118116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.879 [2024-11-06 14:14:26.353835] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:58.879  [2024-11-06T14:14:27.891Z] Copying: 512/512 [B] (average 500 kBps) 00:10:00.256 00:10:00.256 14:14:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ xqhcawveph2o2rojlwjgrnej7cumhhauleif8n1hm7lac0sg186cm7hg3f5au2rdlj3ybwe0wmu970l2utwpe84dqgkp69lkiibhlxy8aj3u8azi10a3yhtd621g2p36yjfhuik3ghj8hzo696j1rrvojbcchz6evu0ejknq3t3lx5ues1ra7ejj3y0lephnvu3rrvb9lf9ew35padjkzr1qlz3ywo9ayd71eok8v81hi1idcs6aw4gygwgtwlz1w8lwi8sgdnoknvff5jtus8lizz1njebehi3z92b3or1z20nc79vswlyy4ef46yja6h8iua4zdaivtih1tm8nv1v3522851is30fxg5phjn1ffxhn5riinia6ipk346q85n3gfc5y4ctl35cj19irus9s5yvplndpcein67znurd612mx0q36t59pld7ifjzjjkhbcsfmifedq378r2wxwf4d1k68lsh4036dz47luyx11exxfb47up7lwa67kt6m == \x\q\h\c\a\w\v\e\p\h\2\o\2\r\o\j\l\w\j\g\r\n\e\j\7\c\u\m\h\h\a\u\l\e\i\f\8\n\1\h\m\7\l\a\c\0\s\g\1\8\6\c\m\7\h\g\3\f\5\a\u\2\r\d\l\j\3\y\b\w\e\0\w\m\u\9\7\0\l\2\u\t\w\p\e\8\4\d\q\g\k\p\6\9\l\k\i\i\b\h\l\x\y\8\a\j\3\u\8\a\z\i\1\0\a\3\y\h\t\d\6\2\1\g\2\p\3\6\y\j\f\h\u\i\k\3\g\h\j\8\h\z\o\6\9\6\j\1\r\r\v\o\j\b\c\c\h\z\6\e\v\u\0\e\j\k\n\q\3\t\3\l\x\5\u\e\s\1\r\a\7\e\j\j\3\y\0\l\e\p\h\n\v\u\3\r\r\v\b\9\l\f\9\e\w\3\5\p\a\d\j\k\z\r\1\q\l\z\3\y\w\o\9\a\y\d\7\1\e\o\k\8\v\8\1\h\i\1\i\d\c\s\6\a\w\4\g\y\g\w\g\t\w\l\z\1\w\8\l\w\i\8\s\g\d\n\o\k\n\v\f\f\5\j\t\u\s\8\l\i\z\z\1\n\j\e\b\e\h\i\3\z\9\2\b\3\o\r\1\z\2\0\n\c\7\9\v\s\w\l\y\y\4\e\f\4\6\y\j\a\6\h\8\i\u\a\4\z\d\a\i\v\t\i\h\1\t\m\8\n\v\1\v\3\5\2\2\8\5\1\i\s\3\0\f\x\g\5\p\h\j\n\1\f\f\x\h\n\5\r\i\i\n\i\a\6\i\p\k\3\4\6\q\8\5\n\3\g\f\c\5\y\4\c\t\l\3\5\c\j\1\9\i\r\u\s\9\s\5\y\v\p\l\n\d\p\c\e\i\n\6\7\z\n\u\r\d\6\1\2\m\x\0\q\3\6\t\5\9\p\l\d\7\i\f\j\z\j\j\k\h\b\c\s\f\m\i\f\e\d\q\3\7\8\r\2\w\x\w\f\4\d\1\k\6\8\l\s\h\4\0\3\6\d\z\4\7\l\u\y\x\1\1\e\x\x\f\b\4\7\u\p\7\l\w\a\6\7\k\t\6\m ]] 00:10:00.256 14:14:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:00.256 14:14:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:10:00.256 [2024-11-06 14:14:27.859276] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:10:00.256 [2024-11-06 14:14:27.859439] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62513 ] 00:10:00.515 [2024-11-06 14:14:28.049587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.774 [2024-11-06 14:14:28.184757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.033 [2024-11-06 14:14:28.418645] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:01.033  [2024-11-06T14:14:30.045Z] Copying: 512/512 [B] (average 166 kBps) 00:10:02.410 00:10:02.410 14:14:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ xqhcawveph2o2rojlwjgrnej7cumhhauleif8n1hm7lac0sg186cm7hg3f5au2rdlj3ybwe0wmu970l2utwpe84dqgkp69lkiibhlxy8aj3u8azi10a3yhtd621g2p36yjfhuik3ghj8hzo696j1rrvojbcchz6evu0ejknq3t3lx5ues1ra7ejj3y0lephnvu3rrvb9lf9ew35padjkzr1qlz3ywo9ayd71eok8v81hi1idcs6aw4gygwgtwlz1w8lwi8sgdnoknvff5jtus8lizz1njebehi3z92b3or1z20nc79vswlyy4ef46yja6h8iua4zdaivtih1tm8nv1v3522851is30fxg5phjn1ffxhn5riinia6ipk346q85n3gfc5y4ctl35cj19irus9s5yvplndpcein67znurd612mx0q36t59pld7ifjzjjkhbcsfmifedq378r2wxwf4d1k68lsh4036dz47luyx11exxfb47up7lwa67kt6m == \x\q\h\c\a\w\v\e\p\h\2\o\2\r\o\j\l\w\j\g\r\n\e\j\7\c\u\m\h\h\a\u\l\e\i\f\8\n\1\h\m\7\l\a\c\0\s\g\1\8\6\c\m\7\h\g\3\f\5\a\u\2\r\d\l\j\3\y\b\w\e\0\w\m\u\9\7\0\l\2\u\t\w\p\e\8\4\d\q\g\k\p\6\9\l\k\i\i\b\h\l\x\y\8\a\j\3\u\8\a\z\i\1\0\a\3\y\h\t\d\6\2\1\g\2\p\3\6\y\j\f\h\u\i\k\3\g\h\j\8\h\z\o\6\9\6\j\1\r\r\v\o\j\b\c\c\h\z\6\e\v\u\0\e\j\k\n\q\3\t\3\l\x\5\u\e\s\1\r\a\7\e\j\j\3\y\0\l\e\p\h\n\v\u\3\r\r\v\b\9\l\f\9\e\w\3\5\p\a\d\j\k\z\r\1\q\l\z\3\y\w\o\9\a\y\d\7\1\e\o\k\8\v\8\1\h\i\1\i\d\c\s\6\a\w\4\g\y\g\w\g\t\w\l\z\1\w\8\l\w\i\8\s\g\d\n\o\k\n\v\f\f\5\j\t\u\s\8\l\i\z\z\1\n\j\e\b\e\h\i\3\z\9\2\b\3\o\r\1\z\2\0\n\c\7\9\v\s\w\l\y\y\4\e\f\4\6\y\j\a\6\h\8\i\u\a\4\z\d\a\i\v\t\i\h\1\t\m\8\n\v\1\v\3\5\2\2\8\5\1\i\s\3\0\f\x\g\5\p\h\j\n\1\f\f\x\h\n\5\r\i\i\n\i\a\6\i\p\k\3\4\6\q\8\5\n\3\g\f\c\5\y\4\c\t\l\3\5\c\j\1\9\i\r\u\s\9\s\5\y\v\p\l\n\d\p\c\e\i\n\6\7\z\n\u\r\d\6\1\2\m\x\0\q\3\6\t\5\9\p\l\d\7\i\f\j\z\j\j\k\h\b\c\s\f\m\i\f\e\d\q\3\7\8\r\2\w\x\w\f\4\d\1\k\6\8\l\s\h\4\0\3\6\d\z\4\7\l\u\y\x\1\1\e\x\x\f\b\4\7\u\p\7\l\w\a\6\7\k\t\6\m ]] 00:10:02.410 14:14:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:02.410 14:14:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:10:02.410 [2024-11-06 14:14:29.861131] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:10:02.410 [2024-11-06 14:14:29.861298] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62540 ] 00:10:02.669 [2024-11-06 14:14:30.052392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.669 [2024-11-06 14:14:30.190467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.927 [2024-11-06 14:14:30.421380] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:02.927  [2024-11-06T14:14:31.972Z] Copying: 512/512 [B] (average 125 kBps) 00:10:04.337 00:10:04.338 14:14:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ xqhcawveph2o2rojlwjgrnej7cumhhauleif8n1hm7lac0sg186cm7hg3f5au2rdlj3ybwe0wmu970l2utwpe84dqgkp69lkiibhlxy8aj3u8azi10a3yhtd621g2p36yjfhuik3ghj8hzo696j1rrvojbcchz6evu0ejknq3t3lx5ues1ra7ejj3y0lephnvu3rrvb9lf9ew35padjkzr1qlz3ywo9ayd71eok8v81hi1idcs6aw4gygwgtwlz1w8lwi8sgdnoknvff5jtus8lizz1njebehi3z92b3or1z20nc79vswlyy4ef46yja6h8iua4zdaivtih1tm8nv1v3522851is30fxg5phjn1ffxhn5riinia6ipk346q85n3gfc5y4ctl35cj19irus9s5yvplndpcein67znurd612mx0q36t59pld7ifjzjjkhbcsfmifedq378r2wxwf4d1k68lsh4036dz47luyx11exxfb47up7lwa67kt6m == \x\q\h\c\a\w\v\e\p\h\2\o\2\r\o\j\l\w\j\g\r\n\e\j\7\c\u\m\h\h\a\u\l\e\i\f\8\n\1\h\m\7\l\a\c\0\s\g\1\8\6\c\m\7\h\g\3\f\5\a\u\2\r\d\l\j\3\y\b\w\e\0\w\m\u\9\7\0\l\2\u\t\w\p\e\8\4\d\q\g\k\p\6\9\l\k\i\i\b\h\l\x\y\8\a\j\3\u\8\a\z\i\1\0\a\3\y\h\t\d\6\2\1\g\2\p\3\6\y\j\f\h\u\i\k\3\g\h\j\8\h\z\o\6\9\6\j\1\r\r\v\o\j\b\c\c\h\z\6\e\v\u\0\e\j\k\n\q\3\t\3\l\x\5\u\e\s\1\r\a\7\e\j\j\3\y\0\l\e\p\h\n\v\u\3\r\r\v\b\9\l\f\9\e\w\3\5\p\a\d\j\k\z\r\1\q\l\z\3\y\w\o\9\a\y\d\7\1\e\o\k\8\v\8\1\h\i\1\i\d\c\s\6\a\w\4\g\y\g\w\g\t\w\l\z\1\w\8\l\w\i\8\s\g\d\n\o\k\n\v\f\f\5\j\t\u\s\8\l\i\z\z\1\n\j\e\b\e\h\i\3\z\9\2\b\3\o\r\1\z\2\0\n\c\7\9\v\s\w\l\y\y\4\e\f\4\6\y\j\a\6\h\8\i\u\a\4\z\d\a\i\v\t\i\h\1\t\m\8\n\v\1\v\3\5\2\2\8\5\1\i\s\3\0\f\x\g\5\p\h\j\n\1\f\f\x\h\n\5\r\i\i\n\i\a\6\i\p\k\3\4\6\q\8\5\n\3\g\f\c\5\y\4\c\t\l\3\5\c\j\1\9\i\r\u\s\9\s\5\y\v\p\l\n\d\p\c\e\i\n\6\7\z\n\u\r\d\6\1\2\m\x\0\q\3\6\t\5\9\p\l\d\7\i\f\j\z\j\j\k\h\b\c\s\f\m\i\f\e\d\q\3\7\8\r\2\w\x\w\f\4\d\1\k\6\8\l\s\h\4\0\3\6\d\z\4\7\l\u\y\x\1\1\e\x\x\f\b\4\7\u\p\7\l\w\a\6\7\k\t\6\m ]] 00:10:04.338 00:10:04.338 real 0m15.935s 00:10:04.338 user 0m12.895s 00:10:04.338 sys 0m9.361s 00:10:04.338 ************************************ 00:10:04.338 END TEST dd_flags_misc 00:10:04.338 ************************************ 00:10:04.338 14:14:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:04.338 14:14:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:10:04.338 14:14:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:10:04.338 14:14:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:10:04.338 * Second test run, disabling liburing, forcing AIO 00:10:04.338 14:14:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:10:04.338 14:14:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:10:04.338 14:14:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:04.338 14:14:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:04.338 14:14:31 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:10:04.338 ************************************ 00:10:04.338 START TEST dd_flag_append_forced_aio 00:10:04.338 ************************************ 00:10:04.338 14:14:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1127 -- # append 00:10:04.338 14:14:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:10:04.338 14:14:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:10:04.338 14:14:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:10:04.338 14:14:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:10:04.338 14:14:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:04.338 14:14:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=uxu3lzrn0bgvnz2rp7d6e89mzqnl23wb 00:10:04.338 14:14:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:10:04.338 14:14:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:10:04.338 14:14:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:04.338 14:14:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=vtbnjxw2pkh2klsb9oziws3n95rd731e 00:10:04.338 14:14:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s uxu3lzrn0bgvnz2rp7d6e89mzqnl23wb 00:10:04.338 14:14:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s vtbnjxw2pkh2klsb9oziws3n95rd731e 00:10:04.338 14:14:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:10:04.338 [2024-11-06 14:14:31.960574] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
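The append case being prepared here writes one generated 32-byte string into each dump file, re-runs spdk_dd with --aio and --oflag=append, and then expects the destination to contain its original string followed by the source string, which is what the concatenation check a few lines below verifies. A minimal sketch, with the generated strings replaced by fixed placeholders:

    DD_APP=(/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio)
    dump0=AAAA; dump1=BBBB               # stand-ins for the two 32-byte random strings
    printf %s "$dump0" > dd.dump0
    printf %s "$dump1" > dd.dump1
    "${DD_APP[@]}" --if=dd.dump0 --of=dd.dump1 --oflag=append
    [[ $(< dd.dump1) == "${dump1}${dump0}" ]] && echo "new data appended after existing data"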
00:10:04.338 [2024-11-06 14:14:31.960731] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62586 ] 00:10:04.596 [2024-11-06 14:14:32.147584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.854 [2024-11-06 14:14:32.282725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.111 [2024-11-06 14:14:32.515112] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:05.112  [2024-11-06T14:14:34.122Z] Copying: 32/32 [B] (average 31 kBps) 00:10:06.487 00:10:06.487 14:14:33 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ vtbnjxw2pkh2klsb9oziws3n95rd731euxu3lzrn0bgvnz2rp7d6e89mzqnl23wb == \v\t\b\n\j\x\w\2\p\k\h\2\k\l\s\b\9\o\z\i\w\s\3\n\9\5\r\d\7\3\1\e\u\x\u\3\l\z\r\n\0\b\g\v\n\z\2\r\p\7\d\6\e\8\9\m\z\q\n\l\2\3\w\b ]] 00:10:06.487 00:10:06.487 real 0m2.070s 00:10:06.487 user 0m1.695s 00:10:06.487 sys 0m0.249s 00:10:06.487 ************************************ 00:10:06.487 END TEST dd_flag_append_forced_aio 00:10:06.487 ************************************ 00:10:06.487 14:14:33 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:06.487 14:14:33 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:06.487 14:14:33 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:10:06.487 14:14:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:06.487 14:14:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:06.487 14:14:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:10:06.487 ************************************ 00:10:06.487 START TEST dd_flag_directory_forced_aio 00:10:06.487 ************************************ 00:10:06.487 14:14:33 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1127 -- # directory 00:10:06.487 14:14:33 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:06.487 14:14:33 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:10:06.487 14:14:33 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:06.487 14:14:33 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:06.487 14:14:33 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:06.487 14:14:33 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:06.487 14:14:33 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:06.487 14:14:33 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:06.487 14:14:33 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:06.487 14:14:33 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:06.487 14:14:33 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:06.487 14:14:33 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:06.487 [2024-11-06 14:14:34.103007] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:10:06.487 [2024-11-06 14:14:34.103175] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62630 ] 00:10:06.746 [2024-11-06 14:14:34.294739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.005 [2024-11-06 14:14:34.427319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.263 [2024-11-06 14:14:34.664862] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:07.263 [2024-11-06 14:14:34.799712] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:10:07.263 [2024-11-06 14:14:34.799802] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:10:07.263 [2024-11-06 14:14:34.799860] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:08.200 [2024-11-06 14:14:35.734991] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:08.460 14:14:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:10:08.460 14:14:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:08.460 14:14:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:10:08.460 14:14:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:10:08.460 14:14:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:10:08.460 14:14:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:08.460 14:14:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:10:08.460 14:14:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:10:08.460 14:14:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:10:08.460 14:14:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:08.460 14:14:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:08.460 14:14:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:08.460 14:14:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:08.460 14:14:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:08.460 14:14:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:08.460 14:14:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:08.461 14:14:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:08.461 14:14:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:10:08.750 [2024-11-06 14:14:36.147515] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:10:08.750 [2024-11-06 14:14:36.147684] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62657 ] 00:10:08.750 [2024-11-06 14:14:36.340013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.008 [2024-11-06 14:14:36.476411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.266 [2024-11-06 14:14:36.712381] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:09.266 [2024-11-06 14:14:36.843353] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:10:09.266 [2024-11-06 14:14:36.843420] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:10:09.266 [2024-11-06 14:14:36.843450] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:10.203 [2024-11-06 14:14:37.769991] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:10.461 14:14:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:10:10.461 14:14:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:10.461 14:14:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:10:10.461 14:14:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:10:10.461 14:14:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:10:10.461 14:14:38 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:10.461 00:10:10.461 real 0m4.090s 00:10:10.461 user 0m3.322s 00:10:10.461 sys 0m0.539s 00:10:10.461 ************************************ 00:10:10.461 END TEST dd_flag_directory_forced_aio 00:10:10.461 ************************************ 00:10:10.461 14:14:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:10.461 14:14:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:10.720 14:14:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:10:10.720 14:14:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:10.720 14:14:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:10.720 14:14:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:10:10.720 ************************************ 00:10:10.720 START TEST dd_flag_nofollow_forced_aio 00:10:10.720 ************************************ 00:10:10.720 14:14:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1127 -- # nofollow 00:10:10.720 14:14:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:10:10.720 14:14:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:10:10.720 14:14:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:10:10.720 14:14:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:10:10.720 14:14:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:10.720 14:14:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:10:10.720 14:14:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:10.720 14:14:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:10.720 14:14:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:10.720 14:14:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:10.720 14:14:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:10.720 14:14:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:10.720 14:14:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:10.720 14:14:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:10.720 14:14:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:10.720 14:14:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:10.720 [2024-11-06 14:14:38.283555] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:10:10.720 [2024-11-06 14:14:38.283723] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62703 ] 00:10:10.978 [2024-11-06 14:14:38.473644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.978 [2024-11-06 14:14:38.612386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.268 [2024-11-06 14:14:38.841968] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:11.527 [2024-11-06 14:14:38.970195] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:10:11.527 [2024-11-06 14:14:38.970278] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:10:11.527 [2024-11-06 14:14:38.970307] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:12.462 [2024-11-06 14:14:39.905780] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:12.721 14:14:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:10:12.721 14:14:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:12.721 14:14:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:10:12.721 14:14:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:10:12.721 14:14:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:10:12.721 14:14:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:12.721 14:14:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:10:12.721 14:14:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:10:12.721 14:14:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:10:12.721 14:14:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:12.721 14:14:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:12.721 14:14:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:12.721 14:14:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:12.721 14:14:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:12.721 14:14:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:12.721 14:14:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:12.721 14:14:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:12.721 14:14:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:10:12.721 [2024-11-06 14:14:40.331526] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:10:12.721 [2024-11-06 14:14:40.331688] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62725 ] 00:10:12.980 [2024-11-06 14:14:40.524655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.240 [2024-11-06 14:14:40.661411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.500 [2024-11-06 14:14:40.895838] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:13.500 [2024-11-06 14:14:41.024908] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:10:13.500 [2024-11-06 14:14:41.024985] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:10:13.500 [2024-11-06 14:14:41.025014] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:14.438 [2024-11-06 14:14:41.950907] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:14.697 14:14:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:10:14.697 14:14:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:14.697 14:14:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:10:14.697 14:14:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:10:14.697 14:14:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:10:14.697 14:14:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:14.697 14:14:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:10:14.697 14:14:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:10:14.697 14:14:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:14.697 14:14:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:14.956 [2024-11-06 14:14:42.375238] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:10:14.956 [2024-11-06 14:14:42.375418] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62750 ] 00:10:15.215 [2024-11-06 14:14:42.609487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.215 [2024-11-06 14:14:42.739174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.474 [2024-11-06 14:14:42.957712] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:15.474  [2024-11-06T14:14:44.485Z] Copying: 512/512 [B] (average 500 kBps) 00:10:16.850 00:10:16.850 14:14:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ 4ex0e52lnv431qu36encps91qx738ud8jixzxkg19sgeh5yyy55jao3xpnhe9yyzddw1szrm6rgupw7zest4ooj3zerkp7ep4yg0qhdv691fv410p979etj87f7zjcckfprcwy223xg41oy0cho5ggzx7ix2gdnegv8oifzq4x5bhnffmxzfwwofyhzpcbdsk81b46z74mvydm8ozsw6yhsiaca7jtrc4cefrq4h4vot498qve1g7h8pfxbiiml0jih7ylam3i7cys4xqfg2vw1wah0n76yz6qdw4tzea70n9l9d064gmz371iwd5x1evanplzivjyq23sc32crylcyiovufeo4hb8c3zgqn71mz71lpnrar8y55i34k0o3p4ujzk0j4st1k8wqbfrloj8u5uitnrk892i86atkq3smmd4dyu4xudibst7ronwp3d37ivn0n4kg58wpedgigfpgmt5sjp0act2jdqa62ichdumb47zc7u89taofdnil8 == \4\e\x\0\e\5\2\l\n\v\4\3\1\q\u\3\6\e\n\c\p\s\9\1\q\x\7\3\8\u\d\8\j\i\x\z\x\k\g\1\9\s\g\e\h\5\y\y\y\5\5\j\a\o\3\x\p\n\h\e\9\y\y\z\d\d\w\1\s\z\r\m\6\r\g\u\p\w\7\z\e\s\t\4\o\o\j\3\z\e\r\k\p\7\e\p\4\y\g\0\q\h\d\v\6\9\1\f\v\4\1\0\p\9\7\9\e\t\j\8\7\f\7\z\j\c\c\k\f\p\r\c\w\y\2\2\3\x\g\4\1\o\y\0\c\h\o\5\g\g\z\x\7\i\x\2\g\d\n\e\g\v\8\o\i\f\z\q\4\x\5\b\h\n\f\f\m\x\z\f\w\w\o\f\y\h\z\p\c\b\d\s\k\8\1\b\4\6\z\7\4\m\v\y\d\m\8\o\z\s\w\6\y\h\s\i\a\c\a\7\j\t\r\c\4\c\e\f\r\q\4\h\4\v\o\t\4\9\8\q\v\e\1\g\7\h\8\p\f\x\b\i\i\m\l\0\j\i\h\7\y\l\a\m\3\i\7\c\y\s\4\x\q\f\g\2\v\w\1\w\a\h\0\n\7\6\y\z\6\q\d\w\4\t\z\e\a\7\0\n\9\l\9\d\0\6\4\g\m\z\3\7\1\i\w\d\5\x\1\e\v\a\n\p\l\z\i\v\j\y\q\2\3\s\c\3\2\c\r\y\l\c\y\i\o\v\u\f\e\o\4\h\b\8\c\3\z\g\q\n\7\1\m\z\7\1\l\p\n\r\a\r\8\y\5\5\i\3\4\k\0\o\3\p\4\u\j\z\k\0\j\4\s\t\1\k\8\w\q\b\f\r\l\o\j\8\u\5\u\i\t\n\r\k\8\9\2\i\8\6\a\t\k\q\3\s\m\m\d\4\d\y\u\4\x\u\d\i\b\s\t\7\r\o\n\w\p\3\d\3\7\i\v\n\0\n\4\k\g\5\8\w\p\e\d\g\i\g\f\p\g\m\t\5\s\j\p\0\a\c\t\2\j\d\q\a\6\2\i\c\h\d\u\m\b\4\7\z\c\7\u\8\9\t\a\o\f\d\n\i\l\8 ]] 00:10:16.850 00:10:16.850 real 0m6.124s 00:10:16.850 user 0m4.941s 00:10:16.850 sys 0m0.826s 00:10:16.850 ************************************ 00:10:16.850 END TEST dd_flag_nofollow_forced_aio 00:10:16.850 ************************************ 00:10:16.850 14:14:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:16.850 14:14:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:16.850 14:14:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
-- # run_test dd_flag_noatime_forced_aio noatime 00:10:16.850 14:14:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:16.850 14:14:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:16.850 14:14:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:10:16.850 ************************************ 00:10:16.850 START TEST dd_flag_noatime_forced_aio 00:10:16.850 ************************************ 00:10:16.850 14:14:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1127 -- # noatime 00:10:16.850 14:14:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:10:16.850 14:14:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:10:16.850 14:14:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:10:16.850 14:14:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:10:16.850 14:14:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:16.850 14:14:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:16.850 14:14:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1730902483 00:10:16.850 14:14:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:16.850 14:14:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1730902484 00:10:16.850 14:14:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:10:17.784 14:14:45 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:18.044 [2024-11-06 14:14:45.506496] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:10:18.044 [2024-11-06 14:14:45.506703] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62808 ] 00:10:18.302 [2024-11-06 14:14:45.684932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.302 [2024-11-06 14:14:45.819788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.561 [2024-11-06 14:14:46.057330] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:18.561  [2024-11-06T14:14:47.571Z] Copying: 512/512 [B] (average 500 kBps) 00:10:19.936 00:10:19.936 14:14:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:19.936 14:14:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1730902483 )) 00:10:19.936 14:14:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:19.936 14:14:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1730902484 )) 00:10:19.936 14:14:47 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:19.936 [2024-11-06 14:14:47.560559] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:10:19.936 [2024-11-06 14:14:47.560730] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62837 ] 00:10:20.192 [2024-11-06 14:14:47.736042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.449 [2024-11-06 14:14:47.878614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.707 [2024-11-06 14:14:48.115639] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:20.707  [2024-11-06T14:14:49.715Z] Copying: 512/512 [B] (average 500 kBps) 00:10:22.080 00:10:22.080 14:14:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:22.080 14:14:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1730902488 )) 00:10:22.080 00:10:22.080 real 0m5.122s 00:10:22.080 user 0m3.280s 00:10:22.080 sys 0m0.590s 00:10:22.080 ************************************ 00:10:22.080 END TEST dd_flag_noatime_forced_aio 00:10:22.080 ************************************ 00:10:22.080 14:14:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:22.080 14:14:49 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:22.080 14:14:49 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:10:22.080 14:14:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:22.080 14:14:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:22.080 14:14:49 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:10:22.080 ************************************ 00:10:22.080 START TEST dd_flags_misc_forced_aio 00:10:22.080 ************************************ 00:10:22.080 14:14:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1127 -- # io 00:10:22.080 14:14:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:10:22.080 14:14:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:10:22.080 14:14:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:10:22.080 14:14:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:10:22.080 14:14:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:10:22.080 14:14:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:10:22.080 14:14:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:22.080 14:14:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:22.080 14:14:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:10:22.080 [2024-11-06 14:14:49.676746] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:10:22.080 [2024-11-06 14:14:49.676921] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62881 ] 00:10:22.337 [2024-11-06 14:14:49.875394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:22.595 [2024-11-06 14:14:50.009579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.852 [2024-11-06 14:14:50.241358] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:22.852  [2024-11-06T14:14:51.859Z] Copying: 512/512 [B] (average 500 kBps) 00:10:24.224 00:10:24.225 14:14:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ cb612wll9fxdp6abwju0a1cbpfbe38mj609wqrj3ldczcrgjtq1pg5mm2oj91o09fkasi92t060p0k1w4ou6ul1e2jk8mc8i9rbfi4gksl5zo6bvj2b9fzrro7gvacxv5hucm8fv9ij5586apvks3ejxwk4khlo9ob7nls8gy5ipm486o2fj7kkddgs93neckdjee0n9m4z8ibg6b76ik98wf299olto5venw2hlqqyrfcfozzjj1m6aoo3v784zk3npsxu05ul0h3lhpkrlrz9jbfi8qr8ghvn4xthenyb31mtmtpm5610uzbsrsf6f29qx8g8hpcu2d7p5qywel8rmvf200pormf39853re65iy7ehyy3bazytz8tyzxqunqiuxd0zimfchbv23s0g0enc8bp2rjpbpvxky5x9t29ps7b0426tcgpvbl4qfgrtwkf4plk4i7bsldllojvwfwpzr9bozv380aiq6zsmzbp1vvzjvsqgg8xdpmz5lc5q == 
\c\b\6\1\2\w\l\l\9\f\x\d\p\6\a\b\w\j\u\0\a\1\c\b\p\f\b\e\3\8\m\j\6\0\9\w\q\r\j\3\l\d\c\z\c\r\g\j\t\q\1\p\g\5\m\m\2\o\j\9\1\o\0\9\f\k\a\s\i\9\2\t\0\6\0\p\0\k\1\w\4\o\u\6\u\l\1\e\2\j\k\8\m\c\8\i\9\r\b\f\i\4\g\k\s\l\5\z\o\6\b\v\j\2\b\9\f\z\r\r\o\7\g\v\a\c\x\v\5\h\u\c\m\8\f\v\9\i\j\5\5\8\6\a\p\v\k\s\3\e\j\x\w\k\4\k\h\l\o\9\o\b\7\n\l\s\8\g\y\5\i\p\m\4\8\6\o\2\f\j\7\k\k\d\d\g\s\9\3\n\e\c\k\d\j\e\e\0\n\9\m\4\z\8\i\b\g\6\b\7\6\i\k\9\8\w\f\2\9\9\o\l\t\o\5\v\e\n\w\2\h\l\q\q\y\r\f\c\f\o\z\z\j\j\1\m\6\a\o\o\3\v\7\8\4\z\k\3\n\p\s\x\u\0\5\u\l\0\h\3\l\h\p\k\r\l\r\z\9\j\b\f\i\8\q\r\8\g\h\v\n\4\x\t\h\e\n\y\b\3\1\m\t\m\t\p\m\5\6\1\0\u\z\b\s\r\s\f\6\f\2\9\q\x\8\g\8\h\p\c\u\2\d\7\p\5\q\y\w\e\l\8\r\m\v\f\2\0\0\p\o\r\m\f\3\9\8\5\3\r\e\6\5\i\y\7\e\h\y\y\3\b\a\z\y\t\z\8\t\y\z\x\q\u\n\q\i\u\x\d\0\z\i\m\f\c\h\b\v\2\3\s\0\g\0\e\n\c\8\b\p\2\r\j\p\b\p\v\x\k\y\5\x\9\t\2\9\p\s\7\b\0\4\2\6\t\c\g\p\v\b\l\4\q\f\g\r\t\w\k\f\4\p\l\k\4\i\7\b\s\l\d\l\l\o\j\v\w\f\w\p\z\r\9\b\o\z\v\3\8\0\a\i\q\6\z\s\m\z\b\p\1\v\v\z\j\v\s\q\g\g\8\x\d\p\m\z\5\l\c\5\q ]] 00:10:24.225 14:14:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:24.225 14:14:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:10:24.225 [2024-11-06 14:14:51.736678] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:10:24.225 [2024-11-06 14:14:51.736860] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62906 ] 00:10:24.482 [2024-11-06 14:14:51.935342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.482 [2024-11-06 14:14:52.070502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.740 [2024-11-06 14:14:52.307221] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:24.999  [2024-11-06T14:14:54.007Z] Copying: 512/512 [B] (average 500 kBps) 00:10:26.372 00:10:26.372 14:14:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ cb612wll9fxdp6abwju0a1cbpfbe38mj609wqrj3ldczcrgjtq1pg5mm2oj91o09fkasi92t060p0k1w4ou6ul1e2jk8mc8i9rbfi4gksl5zo6bvj2b9fzrro7gvacxv5hucm8fv9ij5586apvks3ejxwk4khlo9ob7nls8gy5ipm486o2fj7kkddgs93neckdjee0n9m4z8ibg6b76ik98wf299olto5venw2hlqqyrfcfozzjj1m6aoo3v784zk3npsxu05ul0h3lhpkrlrz9jbfi8qr8ghvn4xthenyb31mtmtpm5610uzbsrsf6f29qx8g8hpcu2d7p5qywel8rmvf200pormf39853re65iy7ehyy3bazytz8tyzxqunqiuxd0zimfchbv23s0g0enc8bp2rjpbpvxky5x9t29ps7b0426tcgpvbl4qfgrtwkf4plk4i7bsldllojvwfwpzr9bozv380aiq6zsmzbp1vvzjvsqgg8xdpmz5lc5q == 
\c\b\6\1\2\w\l\l\9\f\x\d\p\6\a\b\w\j\u\0\a\1\c\b\p\f\b\e\3\8\m\j\6\0\9\w\q\r\j\3\l\d\c\z\c\r\g\j\t\q\1\p\g\5\m\m\2\o\j\9\1\o\0\9\f\k\a\s\i\9\2\t\0\6\0\p\0\k\1\w\4\o\u\6\u\l\1\e\2\j\k\8\m\c\8\i\9\r\b\f\i\4\g\k\s\l\5\z\o\6\b\v\j\2\b\9\f\z\r\r\o\7\g\v\a\c\x\v\5\h\u\c\m\8\f\v\9\i\j\5\5\8\6\a\p\v\k\s\3\e\j\x\w\k\4\k\h\l\o\9\o\b\7\n\l\s\8\g\y\5\i\p\m\4\8\6\o\2\f\j\7\k\k\d\d\g\s\9\3\n\e\c\k\d\j\e\e\0\n\9\m\4\z\8\i\b\g\6\b\7\6\i\k\9\8\w\f\2\9\9\o\l\t\o\5\v\e\n\w\2\h\l\q\q\y\r\f\c\f\o\z\z\j\j\1\m\6\a\o\o\3\v\7\8\4\z\k\3\n\p\s\x\u\0\5\u\l\0\h\3\l\h\p\k\r\l\r\z\9\j\b\f\i\8\q\r\8\g\h\v\n\4\x\t\h\e\n\y\b\3\1\m\t\m\t\p\m\5\6\1\0\u\z\b\s\r\s\f\6\f\2\9\q\x\8\g\8\h\p\c\u\2\d\7\p\5\q\y\w\e\l\8\r\m\v\f\2\0\0\p\o\r\m\f\3\9\8\5\3\r\e\6\5\i\y\7\e\h\y\y\3\b\a\z\y\t\z\8\t\y\z\x\q\u\n\q\i\u\x\d\0\z\i\m\f\c\h\b\v\2\3\s\0\g\0\e\n\c\8\b\p\2\r\j\p\b\p\v\x\k\y\5\x\9\t\2\9\p\s\7\b\0\4\2\6\t\c\g\p\v\b\l\4\q\f\g\r\t\w\k\f\4\p\l\k\4\i\7\b\s\l\d\l\l\o\j\v\w\f\w\p\z\r\9\b\o\z\v\3\8\0\a\i\q\6\z\s\m\z\b\p\1\v\v\z\j\v\s\q\g\g\8\x\d\p\m\z\5\l\c\5\q ]] 00:10:26.372 14:14:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:26.372 14:14:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:10:26.372 [2024-11-06 14:14:53.807531] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:10:26.372 [2024-11-06 14:14:53.807752] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62931 ] 00:10:26.630 [2024-11-06 14:14:54.012333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.630 [2024-11-06 14:14:54.168695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.888 [2024-11-06 14:14:54.421412] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:27.147  [2024-11-06T14:14:56.156Z] Copying: 512/512 [B] (average 250 kBps) 00:10:28.521 00:10:28.521 14:14:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ cb612wll9fxdp6abwju0a1cbpfbe38mj609wqrj3ldczcrgjtq1pg5mm2oj91o09fkasi92t060p0k1w4ou6ul1e2jk8mc8i9rbfi4gksl5zo6bvj2b9fzrro7gvacxv5hucm8fv9ij5586apvks3ejxwk4khlo9ob7nls8gy5ipm486o2fj7kkddgs93neckdjee0n9m4z8ibg6b76ik98wf299olto5venw2hlqqyrfcfozzjj1m6aoo3v784zk3npsxu05ul0h3lhpkrlrz9jbfi8qr8ghvn4xthenyb31mtmtpm5610uzbsrsf6f29qx8g8hpcu2d7p5qywel8rmvf200pormf39853re65iy7ehyy3bazytz8tyzxqunqiuxd0zimfchbv23s0g0enc8bp2rjpbpvxky5x9t29ps7b0426tcgpvbl4qfgrtwkf4plk4i7bsldllojvwfwpzr9bozv380aiq6zsmzbp1vvzjvsqgg8xdpmz5lc5q == 
\c\b\6\1\2\w\l\l\9\f\x\d\p\6\a\b\w\j\u\0\a\1\c\b\p\f\b\e\3\8\m\j\6\0\9\w\q\r\j\3\l\d\c\z\c\r\g\j\t\q\1\p\g\5\m\m\2\o\j\9\1\o\0\9\f\k\a\s\i\9\2\t\0\6\0\p\0\k\1\w\4\o\u\6\u\l\1\e\2\j\k\8\m\c\8\i\9\r\b\f\i\4\g\k\s\l\5\z\o\6\b\v\j\2\b\9\f\z\r\r\o\7\g\v\a\c\x\v\5\h\u\c\m\8\f\v\9\i\j\5\5\8\6\a\p\v\k\s\3\e\j\x\w\k\4\k\h\l\o\9\o\b\7\n\l\s\8\g\y\5\i\p\m\4\8\6\o\2\f\j\7\k\k\d\d\g\s\9\3\n\e\c\k\d\j\e\e\0\n\9\m\4\z\8\i\b\g\6\b\7\6\i\k\9\8\w\f\2\9\9\o\l\t\o\5\v\e\n\w\2\h\l\q\q\y\r\f\c\f\o\z\z\j\j\1\m\6\a\o\o\3\v\7\8\4\z\k\3\n\p\s\x\u\0\5\u\l\0\h\3\l\h\p\k\r\l\r\z\9\j\b\f\i\8\q\r\8\g\h\v\n\4\x\t\h\e\n\y\b\3\1\m\t\m\t\p\m\5\6\1\0\u\z\b\s\r\s\f\6\f\2\9\q\x\8\g\8\h\p\c\u\2\d\7\p\5\q\y\w\e\l\8\r\m\v\f\2\0\0\p\o\r\m\f\3\9\8\5\3\r\e\6\5\i\y\7\e\h\y\y\3\b\a\z\y\t\z\8\t\y\z\x\q\u\n\q\i\u\x\d\0\z\i\m\f\c\h\b\v\2\3\s\0\g\0\e\n\c\8\b\p\2\r\j\p\b\p\v\x\k\y\5\x\9\t\2\9\p\s\7\b\0\4\2\6\t\c\g\p\v\b\l\4\q\f\g\r\t\w\k\f\4\p\l\k\4\i\7\b\s\l\d\l\l\o\j\v\w\f\w\p\z\r\9\b\o\z\v\3\8\0\a\i\q\6\z\s\m\z\b\p\1\v\v\z\j\v\s\q\g\g\8\x\d\p\m\z\5\l\c\5\q ]] 00:10:28.521 14:14:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:28.521 14:14:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:10:28.521 [2024-11-06 14:14:55.915529] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:10:28.521 [2024-11-06 14:14:55.915694] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62956 ] 00:10:28.521 [2024-11-06 14:14:56.107805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.779 [2024-11-06 14:14:56.241038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.038 [2024-11-06 14:14:56.475504] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:29.297  [2024-11-06T14:14:58.308Z] Copying: 512/512 [B] (average 5626 Bps) 00:10:30.673 00:10:30.673 14:14:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ cb612wll9fxdp6abwju0a1cbpfbe38mj609wqrj3ldczcrgjtq1pg5mm2oj91o09fkasi92t060p0k1w4ou6ul1e2jk8mc8i9rbfi4gksl5zo6bvj2b9fzrro7gvacxv5hucm8fv9ij5586apvks3ejxwk4khlo9ob7nls8gy5ipm486o2fj7kkddgs93neckdjee0n9m4z8ibg6b76ik98wf299olto5venw2hlqqyrfcfozzjj1m6aoo3v784zk3npsxu05ul0h3lhpkrlrz9jbfi8qr8ghvn4xthenyb31mtmtpm5610uzbsrsf6f29qx8g8hpcu2d7p5qywel8rmvf200pormf39853re65iy7ehyy3bazytz8tyzxqunqiuxd0zimfchbv23s0g0enc8bp2rjpbpvxky5x9t29ps7b0426tcgpvbl4qfgrtwkf4plk4i7bsldllojvwfwpzr9bozv380aiq6zsmzbp1vvzjvsqgg8xdpmz5lc5q == 
\c\b\6\1\2\w\l\l\9\f\x\d\p\6\a\b\w\j\u\0\a\1\c\b\p\f\b\e\3\8\m\j\6\0\9\w\q\r\j\3\l\d\c\z\c\r\g\j\t\q\1\p\g\5\m\m\2\o\j\9\1\o\0\9\f\k\a\s\i\9\2\t\0\6\0\p\0\k\1\w\4\o\u\6\u\l\1\e\2\j\k\8\m\c\8\i\9\r\b\f\i\4\g\k\s\l\5\z\o\6\b\v\j\2\b\9\f\z\r\r\o\7\g\v\a\c\x\v\5\h\u\c\m\8\f\v\9\i\j\5\5\8\6\a\p\v\k\s\3\e\j\x\w\k\4\k\h\l\o\9\o\b\7\n\l\s\8\g\y\5\i\p\m\4\8\6\o\2\f\j\7\k\k\d\d\g\s\9\3\n\e\c\k\d\j\e\e\0\n\9\m\4\z\8\i\b\g\6\b\7\6\i\k\9\8\w\f\2\9\9\o\l\t\o\5\v\e\n\w\2\h\l\q\q\y\r\f\c\f\o\z\z\j\j\1\m\6\a\o\o\3\v\7\8\4\z\k\3\n\p\s\x\u\0\5\u\l\0\h\3\l\h\p\k\r\l\r\z\9\j\b\f\i\8\q\r\8\g\h\v\n\4\x\t\h\e\n\y\b\3\1\m\t\m\t\p\m\5\6\1\0\u\z\b\s\r\s\f\6\f\2\9\q\x\8\g\8\h\p\c\u\2\d\7\p\5\q\y\w\e\l\8\r\m\v\f\2\0\0\p\o\r\m\f\3\9\8\5\3\r\e\6\5\i\y\7\e\h\y\y\3\b\a\z\y\t\z\8\t\y\z\x\q\u\n\q\i\u\x\d\0\z\i\m\f\c\h\b\v\2\3\s\0\g\0\e\n\c\8\b\p\2\r\j\p\b\p\v\x\k\y\5\x\9\t\2\9\p\s\7\b\0\4\2\6\t\c\g\p\v\b\l\4\q\f\g\r\t\w\k\f\4\p\l\k\4\i\7\b\s\l\d\l\l\o\j\v\w\f\w\p\z\r\9\b\o\z\v\3\8\0\a\i\q\6\z\s\m\z\b\p\1\v\v\z\j\v\s\q\g\g\8\x\d\p\m\z\5\l\c\5\q ]] 00:10:30.673 14:14:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:10:30.673 14:14:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:10:30.673 14:14:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:10:30.673 14:14:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:30.673 14:14:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:30.673 14:14:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:10:30.673 [2024-11-06 14:14:58.066094] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:10:30.673 [2024-11-06 14:14:58.066253] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62981 ] 00:10:30.673 [2024-11-06 14:14:58.256670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.935 [2024-11-06 14:14:58.387748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.196 [2024-11-06 14:14:58.620760] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:31.196  [2024-11-06T14:15:00.209Z] Copying: 512/512 [B] (average 500 kBps) 00:10:32.574 00:10:32.574 14:14:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ gniqio1zc31lpkiegkslo48qpruf6gt6hvywny9c7ruyu0hvyyy9j3jqmnckdmafr4msgw93by50gs0zh1ayusrxpg6opjtpi8ypo6axzxll4gyprwcm0ipzlusyyx0ywmolktp1ti2hu5wn8hitd06rv2cdxnalgqh1ag056qm3f17lam6jj1gcsmidangtodgdiay7cto9hhn43psg8q8e4pcgc6l886l4udimsrdpgmuxlxslicnfxwclj6kl7uyqx8x1ezvjkq4b4od3wyjpwgk9b4rfwg4rzwhun2ak61eu91o1byodmppdbuuegl1qz5k8v67es1f7bdst0zke2s2a2b491x2oor7cwxnrv6yeye19eyp2j9aqynsw3cmf18ob376ver9nerccusdwhp8dur4de9ftydeivctozna3qmlw5ggwsax04z2tl1u3q16xsif9fm0q97cbpy0b8bmywxm33ddc7ab9zz111fy1lk5rskwoz1ggq6ht == \g\n\i\q\i\o\1\z\c\3\1\l\p\k\i\e\g\k\s\l\o\4\8\q\p\r\u\f\6\g\t\6\h\v\y\w\n\y\9\c\7\r\u\y\u\0\h\v\y\y\y\9\j\3\j\q\m\n\c\k\d\m\a\f\r\4\m\s\g\w\9\3\b\y\5\0\g\s\0\z\h\1\a\y\u\s\r\x\p\g\6\o\p\j\t\p\i\8\y\p\o\6\a\x\z\x\l\l\4\g\y\p\r\w\c\m\0\i\p\z\l\u\s\y\y\x\0\y\w\m\o\l\k\t\p\1\t\i\2\h\u\5\w\n\8\h\i\t\d\0\6\r\v\2\c\d\x\n\a\l\g\q\h\1\a\g\0\5\6\q\m\3\f\1\7\l\a\m\6\j\j\1\g\c\s\m\i\d\a\n\g\t\o\d\g\d\i\a\y\7\c\t\o\9\h\h\n\4\3\p\s\g\8\q\8\e\4\p\c\g\c\6\l\8\8\6\l\4\u\d\i\m\s\r\d\p\g\m\u\x\l\x\s\l\i\c\n\f\x\w\c\l\j\6\k\l\7\u\y\q\x\8\x\1\e\z\v\j\k\q\4\b\4\o\d\3\w\y\j\p\w\g\k\9\b\4\r\f\w\g\4\r\z\w\h\u\n\2\a\k\6\1\e\u\9\1\o\1\b\y\o\d\m\p\p\d\b\u\u\e\g\l\1\q\z\5\k\8\v\6\7\e\s\1\f\7\b\d\s\t\0\z\k\e\2\s\2\a\2\b\4\9\1\x\2\o\o\r\7\c\w\x\n\r\v\6\y\e\y\e\1\9\e\y\p\2\j\9\a\q\y\n\s\w\3\c\m\f\1\8\o\b\3\7\6\v\e\r\9\n\e\r\c\c\u\s\d\w\h\p\8\d\u\r\4\d\e\9\f\t\y\d\e\i\v\c\t\o\z\n\a\3\q\m\l\w\5\g\g\w\s\a\x\0\4\z\2\t\l\1\u\3\q\1\6\x\s\i\f\9\f\m\0\q\9\7\c\b\p\y\0\b\8\b\m\y\w\x\m\3\3\d\d\c\7\a\b\9\z\z\1\1\1\f\y\1\l\k\5\r\s\k\w\o\z\1\g\g\q\6\h\t ]] 00:10:32.574 14:14:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:32.574 14:14:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:10:32.574 [2024-11-06 14:15:00.075416] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:10:32.574 [2024-11-06 14:15:00.075581] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63007 ] 00:10:32.832 [2024-11-06 14:15:00.262440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.832 [2024-11-06 14:15:00.400123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.090 [2024-11-06 14:15:00.630189] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:33.349  [2024-11-06T14:15:02.361Z] Copying: 512/512 [B] (average 500 kBps) 00:10:34.726 00:10:34.726 14:15:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ gniqio1zc31lpkiegkslo48qpruf6gt6hvywny9c7ruyu0hvyyy9j3jqmnckdmafr4msgw93by50gs0zh1ayusrxpg6opjtpi8ypo6axzxll4gyprwcm0ipzlusyyx0ywmolktp1ti2hu5wn8hitd06rv2cdxnalgqh1ag056qm3f17lam6jj1gcsmidangtodgdiay7cto9hhn43psg8q8e4pcgc6l886l4udimsrdpgmuxlxslicnfxwclj6kl7uyqx8x1ezvjkq4b4od3wyjpwgk9b4rfwg4rzwhun2ak61eu91o1byodmppdbuuegl1qz5k8v67es1f7bdst0zke2s2a2b491x2oor7cwxnrv6yeye19eyp2j9aqynsw3cmf18ob376ver9nerccusdwhp8dur4de9ftydeivctozna3qmlw5ggwsax04z2tl1u3q16xsif9fm0q97cbpy0b8bmywxm33ddc7ab9zz111fy1lk5rskwoz1ggq6ht == \g\n\i\q\i\o\1\z\c\3\1\l\p\k\i\e\g\k\s\l\o\4\8\q\p\r\u\f\6\g\t\6\h\v\y\w\n\y\9\c\7\r\u\y\u\0\h\v\y\y\y\9\j\3\j\q\m\n\c\k\d\m\a\f\r\4\m\s\g\w\9\3\b\y\5\0\g\s\0\z\h\1\a\y\u\s\r\x\p\g\6\o\p\j\t\p\i\8\y\p\o\6\a\x\z\x\l\l\4\g\y\p\r\w\c\m\0\i\p\z\l\u\s\y\y\x\0\y\w\m\o\l\k\t\p\1\t\i\2\h\u\5\w\n\8\h\i\t\d\0\6\r\v\2\c\d\x\n\a\l\g\q\h\1\a\g\0\5\6\q\m\3\f\1\7\l\a\m\6\j\j\1\g\c\s\m\i\d\a\n\g\t\o\d\g\d\i\a\y\7\c\t\o\9\h\h\n\4\3\p\s\g\8\q\8\e\4\p\c\g\c\6\l\8\8\6\l\4\u\d\i\m\s\r\d\p\g\m\u\x\l\x\s\l\i\c\n\f\x\w\c\l\j\6\k\l\7\u\y\q\x\8\x\1\e\z\v\j\k\q\4\b\4\o\d\3\w\y\j\p\w\g\k\9\b\4\r\f\w\g\4\r\z\w\h\u\n\2\a\k\6\1\e\u\9\1\o\1\b\y\o\d\m\p\p\d\b\u\u\e\g\l\1\q\z\5\k\8\v\6\7\e\s\1\f\7\b\d\s\t\0\z\k\e\2\s\2\a\2\b\4\9\1\x\2\o\o\r\7\c\w\x\n\r\v\6\y\e\y\e\1\9\e\y\p\2\j\9\a\q\y\n\s\w\3\c\m\f\1\8\o\b\3\7\6\v\e\r\9\n\e\r\c\c\u\s\d\w\h\p\8\d\u\r\4\d\e\9\f\t\y\d\e\i\v\c\t\o\z\n\a\3\q\m\l\w\5\g\g\w\s\a\x\0\4\z\2\t\l\1\u\3\q\1\6\x\s\i\f\9\f\m\0\q\9\7\c\b\p\y\0\b\8\b\m\y\w\x\m\3\3\d\d\c\7\a\b\9\z\z\1\1\1\f\y\1\l\k\5\r\s\k\w\o\z\1\g\g\q\6\h\t ]] 00:10:34.726 14:15:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:34.726 14:15:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:10:34.726 [2024-11-06 14:15:02.093813] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:10:34.726 [2024-11-06 14:15:02.093984] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63031 ] 00:10:34.726 [2024-11-06 14:15:02.284424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.985 [2024-11-06 14:15:02.417564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.243 [2024-11-06 14:15:02.651326] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:35.243  [2024-11-06T14:15:04.253Z] Copying: 512/512 [B] (average 250 kBps) 00:10:36.618 00:10:36.618 14:15:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ gniqio1zc31lpkiegkslo48qpruf6gt6hvywny9c7ruyu0hvyyy9j3jqmnckdmafr4msgw93by50gs0zh1ayusrxpg6opjtpi8ypo6axzxll4gyprwcm0ipzlusyyx0ywmolktp1ti2hu5wn8hitd06rv2cdxnalgqh1ag056qm3f17lam6jj1gcsmidangtodgdiay7cto9hhn43psg8q8e4pcgc6l886l4udimsrdpgmuxlxslicnfxwclj6kl7uyqx8x1ezvjkq4b4od3wyjpwgk9b4rfwg4rzwhun2ak61eu91o1byodmppdbuuegl1qz5k8v67es1f7bdst0zke2s2a2b491x2oor7cwxnrv6yeye19eyp2j9aqynsw3cmf18ob376ver9nerccusdwhp8dur4de9ftydeivctozna3qmlw5ggwsax04z2tl1u3q16xsif9fm0q97cbpy0b8bmywxm33ddc7ab9zz111fy1lk5rskwoz1ggq6ht == \g\n\i\q\i\o\1\z\c\3\1\l\p\k\i\e\g\k\s\l\o\4\8\q\p\r\u\f\6\g\t\6\h\v\y\w\n\y\9\c\7\r\u\y\u\0\h\v\y\y\y\9\j\3\j\q\m\n\c\k\d\m\a\f\r\4\m\s\g\w\9\3\b\y\5\0\g\s\0\z\h\1\a\y\u\s\r\x\p\g\6\o\p\j\t\p\i\8\y\p\o\6\a\x\z\x\l\l\4\g\y\p\r\w\c\m\0\i\p\z\l\u\s\y\y\x\0\y\w\m\o\l\k\t\p\1\t\i\2\h\u\5\w\n\8\h\i\t\d\0\6\r\v\2\c\d\x\n\a\l\g\q\h\1\a\g\0\5\6\q\m\3\f\1\7\l\a\m\6\j\j\1\g\c\s\m\i\d\a\n\g\t\o\d\g\d\i\a\y\7\c\t\o\9\h\h\n\4\3\p\s\g\8\q\8\e\4\p\c\g\c\6\l\8\8\6\l\4\u\d\i\m\s\r\d\p\g\m\u\x\l\x\s\l\i\c\n\f\x\w\c\l\j\6\k\l\7\u\y\q\x\8\x\1\e\z\v\j\k\q\4\b\4\o\d\3\w\y\j\p\w\g\k\9\b\4\r\f\w\g\4\r\z\w\h\u\n\2\a\k\6\1\e\u\9\1\o\1\b\y\o\d\m\p\p\d\b\u\u\e\g\l\1\q\z\5\k\8\v\6\7\e\s\1\f\7\b\d\s\t\0\z\k\e\2\s\2\a\2\b\4\9\1\x\2\o\o\r\7\c\w\x\n\r\v\6\y\e\y\e\1\9\e\y\p\2\j\9\a\q\y\n\s\w\3\c\m\f\1\8\o\b\3\7\6\v\e\r\9\n\e\r\c\c\u\s\d\w\h\p\8\d\u\r\4\d\e\9\f\t\y\d\e\i\v\c\t\o\z\n\a\3\q\m\l\w\5\g\g\w\s\a\x\0\4\z\2\t\l\1\u\3\q\1\6\x\s\i\f\9\f\m\0\q\9\7\c\b\p\y\0\b\8\b\m\y\w\x\m\3\3\d\d\c\7\a\b\9\z\z\1\1\1\f\y\1\l\k\5\r\s\k\w\o\z\1\g\g\q\6\h\t ]] 00:10:36.618 14:15:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:36.618 14:15:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:10:36.618 [2024-11-06 14:15:04.124359] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:10:36.618 [2024-11-06 14:15:04.124515] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63052 ] 00:10:36.877 [2024-11-06 14:15:04.312102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.877 [2024-11-06 14:15:04.445919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.136 [2024-11-06 14:15:04.679785] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:37.395  [2024-11-06T14:15:06.421Z] Copying: 512/512 [B] (average 500 kBps) 00:10:38.786 00:10:38.786 14:15:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ gniqio1zc31lpkiegkslo48qpruf6gt6hvywny9c7ruyu0hvyyy9j3jqmnckdmafr4msgw93by50gs0zh1ayusrxpg6opjtpi8ypo6axzxll4gyprwcm0ipzlusyyx0ywmolktp1ti2hu5wn8hitd06rv2cdxnalgqh1ag056qm3f17lam6jj1gcsmidangtodgdiay7cto9hhn43psg8q8e4pcgc6l886l4udimsrdpgmuxlxslicnfxwclj6kl7uyqx8x1ezvjkq4b4od3wyjpwgk9b4rfwg4rzwhun2ak61eu91o1byodmppdbuuegl1qz5k8v67es1f7bdst0zke2s2a2b491x2oor7cwxnrv6yeye19eyp2j9aqynsw3cmf18ob376ver9nerccusdwhp8dur4de9ftydeivctozna3qmlw5ggwsax04z2tl1u3q16xsif9fm0q97cbpy0b8bmywxm33ddc7ab9zz111fy1lk5rskwoz1ggq6ht == \g\n\i\q\i\o\1\z\c\3\1\l\p\k\i\e\g\k\s\l\o\4\8\q\p\r\u\f\6\g\t\6\h\v\y\w\n\y\9\c\7\r\u\y\u\0\h\v\y\y\y\9\j\3\j\q\m\n\c\k\d\m\a\f\r\4\m\s\g\w\9\3\b\y\5\0\g\s\0\z\h\1\a\y\u\s\r\x\p\g\6\o\p\j\t\p\i\8\y\p\o\6\a\x\z\x\l\l\4\g\y\p\r\w\c\m\0\i\p\z\l\u\s\y\y\x\0\y\w\m\o\l\k\t\p\1\t\i\2\h\u\5\w\n\8\h\i\t\d\0\6\r\v\2\c\d\x\n\a\l\g\q\h\1\a\g\0\5\6\q\m\3\f\1\7\l\a\m\6\j\j\1\g\c\s\m\i\d\a\n\g\t\o\d\g\d\i\a\y\7\c\t\o\9\h\h\n\4\3\p\s\g\8\q\8\e\4\p\c\g\c\6\l\8\8\6\l\4\u\d\i\m\s\r\d\p\g\m\u\x\l\x\s\l\i\c\n\f\x\w\c\l\j\6\k\l\7\u\y\q\x\8\x\1\e\z\v\j\k\q\4\b\4\o\d\3\w\y\j\p\w\g\k\9\b\4\r\f\w\g\4\r\z\w\h\u\n\2\a\k\6\1\e\u\9\1\o\1\b\y\o\d\m\p\p\d\b\u\u\e\g\l\1\q\z\5\k\8\v\6\7\e\s\1\f\7\b\d\s\t\0\z\k\e\2\s\2\a\2\b\4\9\1\x\2\o\o\r\7\c\w\x\n\r\v\6\y\e\y\e\1\9\e\y\p\2\j\9\a\q\y\n\s\w\3\c\m\f\1\8\o\b\3\7\6\v\e\r\9\n\e\r\c\c\u\s\d\w\h\p\8\d\u\r\4\d\e\9\f\t\y\d\e\i\v\c\t\o\z\n\a\3\q\m\l\w\5\g\g\w\s\a\x\0\4\z\2\t\l\1\u\3\q\1\6\x\s\i\f\9\f\m\0\q\9\7\c\b\p\y\0\b\8\b\m\y\w\x\m\3\3\d\d\c\7\a\b\9\z\z\1\1\1\f\y\1\l\k\5\r\s\k\w\o\z\1\g\g\q\6\h\t ]] 00:10:38.786 00:10:38.786 real 0m16.505s 00:10:38.786 user 0m13.252s 00:10:38.786 sys 0m2.143s 00:10:38.786 14:15:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:38.786 ************************************ 00:10:38.786 END TEST dd_flags_misc_forced_aio 00:10:38.786 14:15:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:38.786 ************************************ 00:10:38.786 14:15:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:10:38.786 14:15:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:10:38.786 14:15:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:10:38.786 00:10:38.786 real 1m8.026s 00:10:38.786 user 0m52.773s 00:10:38.786 sys 0m20.198s 00:10:38.786 14:15:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:38.786 14:15:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set 
+x 00:10:38.786 ************************************ 00:10:38.786 END TEST spdk_dd_posix 00:10:38.786 ************************************ 00:10:38.786 14:15:06 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:10:38.786 14:15:06 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:38.786 14:15:06 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:38.786 14:15:06 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:10:38.787 ************************************ 00:10:38.787 START TEST spdk_dd_malloc 00:10:38.787 ************************************ 00:10:38.787 14:15:06 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:10:38.787 * Looking for test storage... 00:10:38.787 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:10:38.787 14:15:06 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:38.787 14:15:06 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1691 -- # lcov --version 00:10:38.787 14:15:06 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:38.787 14:15:06 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:38.787 14:15:06 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:38.787 14:15:06 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:38.787 14:15:06 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:38.787 14:15:06 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:10:38.787 14:15:06 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:10:38.787 14:15:06 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:10:38.787 14:15:06 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:10:38.787 14:15:06 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:10:38.787 14:15:06 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:10:38.787 14:15:06 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:10:38.787 14:15:06 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:38.787 14:15:06 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:10:38.787 14:15:06 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:10:38.787 14:15:06 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:38.787 14:15:06 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:38.787 14:15:06 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:10:38.787 14:15:06 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:10:38.787 14:15:06 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:38.787 14:15:06 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:10:38.787 14:15:06 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:38.787 14:15:06 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:10:38.787 14:15:06 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:10:38.787 14:15:06 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:38.787 14:15:06 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:10:38.787 14:15:06 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:38.787 14:15:06 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:38.787 14:15:06 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:38.787 14:15:06 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:10:38.787 14:15:06 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:38.787 14:15:06 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:38.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.787 --rc genhtml_branch_coverage=1 00:10:38.787 --rc genhtml_function_coverage=1 00:10:38.787 --rc genhtml_legend=1 00:10:38.787 --rc geninfo_all_blocks=1 00:10:38.787 --rc geninfo_unexecuted_blocks=1 00:10:38.787 00:10:38.787 ' 00:10:38.787 14:15:06 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:38.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.787 --rc genhtml_branch_coverage=1 00:10:38.787 --rc genhtml_function_coverage=1 00:10:38.787 --rc genhtml_legend=1 00:10:38.787 --rc geninfo_all_blocks=1 00:10:38.787 --rc geninfo_unexecuted_blocks=1 00:10:38.787 00:10:38.787 ' 00:10:39.046 14:15:06 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:39.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.046 --rc genhtml_branch_coverage=1 00:10:39.046 --rc genhtml_function_coverage=1 00:10:39.046 --rc genhtml_legend=1 00:10:39.046 --rc geninfo_all_blocks=1 00:10:39.046 --rc geninfo_unexecuted_blocks=1 00:10:39.046 00:10:39.046 ' 00:10:39.046 14:15:06 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:39.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.046 --rc genhtml_branch_coverage=1 00:10:39.046 --rc genhtml_function_coverage=1 00:10:39.046 --rc genhtml_legend=1 00:10:39.046 --rc geninfo_all_blocks=1 00:10:39.046 --rc geninfo_unexecuted_blocks=1 00:10:39.046 00:10:39.046 ' 00:10:39.046 14:15:06 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:39.046 14:15:06 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:10:39.046 14:15:06 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:39.046 14:15:06 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:39.046 14:15:06 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:39.046 14:15:06 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.046 14:15:06 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.046 14:15:06 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.046 14:15:06 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:10:39.046 14:15:06 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.046 14:15:06 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:10:39.046 14:15:06 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:39.046 14:15:06 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:39.046 14:15:06 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:10:39.046 ************************************ 00:10:39.046 START TEST dd_malloc_copy 00:10:39.046 ************************************ 00:10:39.046 14:15:06 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1127 -- # malloc_copy 00:10:39.046 14:15:06 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:10:39.046 14:15:06 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:10:39.046 14:15:06 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
00:10:39.046 14:15:06 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:10:39.046 14:15:06 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:10:39.046 14:15:06 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:10:39.047 14:15:06 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:10:39.047 14:15:06 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:10:39.047 14:15:06 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:10:39.047 14:15:06 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:10:39.047 { 00:10:39.047 "subsystems": [ 00:10:39.047 { 00:10:39.047 "subsystem": "bdev", 00:10:39.047 "config": [ 00:10:39.047 { 00:10:39.047 "params": { 00:10:39.047 "block_size": 512, 00:10:39.047 "num_blocks": 1048576, 00:10:39.047 "name": "malloc0" 00:10:39.047 }, 00:10:39.047 "method": "bdev_malloc_create" 00:10:39.047 }, 00:10:39.047 { 00:10:39.047 "params": { 00:10:39.047 "block_size": 512, 00:10:39.047 "num_blocks": 1048576, 00:10:39.047 "name": "malloc1" 00:10:39.047 }, 00:10:39.047 "method": "bdev_malloc_create" 00:10:39.047 }, 00:10:39.047 { 00:10:39.047 "method": "bdev_wait_for_examine" 00:10:39.047 } 00:10:39.047 ] 00:10:39.047 } 00:10:39.047 ] 00:10:39.047 } 00:10:39.047 [2024-11-06 14:15:06.558094] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:10:39.047 [2024-11-06 14:15:06.558244] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63151 ] 00:10:39.304 [2024-11-06 14:15:06.742371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.304 [2024-11-06 14:15:06.872010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.562 [2024-11-06 14:15:07.108741] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:42.094  [2024-11-06T14:15:10.661Z] Copying: 205/512 [MB] (205 MBps) [2024-11-06T14:15:11.227Z] Copying: 410/512 [MB] (204 MBps) [2024-11-06T14:15:15.414Z] Copying: 512/512 [MB] (average 204 MBps) 00:10:47.779 00:10:47.779 14:15:15 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:10:47.779 14:15:15 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:10:47.779 14:15:15 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:10:47.779 14:15:15 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:10:47.779 { 00:10:47.779 "subsystems": [ 00:10:47.779 { 00:10:47.779 "subsystem": "bdev", 00:10:47.779 "config": [ 00:10:47.779 { 00:10:47.779 "params": { 00:10:47.779 "block_size": 512, 00:10:47.779 "num_blocks": 1048576, 00:10:47.779 "name": "malloc0" 00:10:47.779 }, 00:10:47.779 "method": "bdev_malloc_create" 00:10:47.779 }, 00:10:47.779 { 00:10:47.779 "params": { 00:10:47.779 "block_size": 512, 00:10:47.779 "num_blocks": 1048576, 00:10:47.779 "name": "malloc1" 00:10:47.779 }, 00:10:47.779 "method": 
"bdev_malloc_create" 00:10:47.779 }, 00:10:47.779 { 00:10:47.779 "method": "bdev_wait_for_examine" 00:10:47.779 } 00:10:47.779 ] 00:10:47.779 } 00:10:47.779 ] 00:10:47.779 } 00:10:48.037 [2024-11-06 14:15:15.427729] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:10:48.037 [2024-11-06 14:15:15.427939] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63255 ] 00:10:48.037 [2024-11-06 14:15:15.624796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.296 [2024-11-06 14:15:15.774469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.555 [2024-11-06 14:15:16.020499] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:51.084  [2024-11-06T14:15:19.667Z] Copying: 204/512 [MB] (204 MBps) [2024-11-06T14:15:19.926Z] Copying: 408/512 [MB] (203 MBps) [2024-11-06T14:15:24.113Z] Copying: 512/512 [MB] (average 204 MBps) 00:10:56.478 00:10:56.478 00:10:56.478 real 0m17.648s 00:10:56.478 user 0m16.200s 00:10:56.478 sys 0m1.230s 00:10:56.478 14:15:24 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:56.478 14:15:24 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:10:56.478 ************************************ 00:10:56.478 END TEST dd_malloc_copy 00:10:56.478 ************************************ 00:10:56.736 00:10:56.736 real 0m17.976s 00:10:56.736 user 0m16.372s 00:10:56.736 sys 0m1.399s 00:10:56.736 14:15:24 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:56.736 14:15:24 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:10:56.736 ************************************ 00:10:56.736 END TEST spdk_dd_malloc 00:10:56.736 ************************************ 00:10:56.736 14:15:24 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:10:56.736 14:15:24 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:56.736 14:15:24 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:56.736 14:15:24 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:10:56.736 ************************************ 00:10:56.736 START TEST spdk_dd_bdev_to_bdev 00:10:56.736 ************************************ 00:10:56.736 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:10:56.736 * Looking for test storage... 
00:10:56.736 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:10:56.736 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:56.736 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1691 -- # lcov --version 00:10:56.736 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:56.995 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:56.995 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:56.995 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:56.995 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:56.995 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:10:56.995 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:10:56.995 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:10:56.995 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:10:56.995 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:10:56.995 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:10:56.995 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:10:56.995 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:56.995 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:10:56.995 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:10:56.995 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:56.995 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:56.995 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:10:56.995 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:10:56.995 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:56.995 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:10:56.995 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:10:56.995 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:10:56.995 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:10:56.995 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:56.995 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:10:56.995 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:10:56.995 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:56.995 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:56.995 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:10:56.995 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:56.995 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:56.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.995 --rc genhtml_branch_coverage=1 00:10:56.995 --rc genhtml_function_coverage=1 00:10:56.995 --rc genhtml_legend=1 00:10:56.995 --rc geninfo_all_blocks=1 00:10:56.995 --rc geninfo_unexecuted_blocks=1 00:10:56.995 00:10:56.995 ' 00:10:56.995 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:56.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.995 --rc genhtml_branch_coverage=1 00:10:56.995 --rc genhtml_function_coverage=1 00:10:56.995 --rc genhtml_legend=1 00:10:56.995 --rc geninfo_all_blocks=1 00:10:56.995 --rc geninfo_unexecuted_blocks=1 00:10:56.995 00:10:56.995 ' 00:10:56.995 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:56.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.995 --rc genhtml_branch_coverage=1 00:10:56.995 --rc genhtml_function_coverage=1 00:10:56.995 --rc genhtml_legend=1 00:10:56.995 --rc geninfo_all_blocks=1 00:10:56.995 --rc geninfo_unexecuted_blocks=1 00:10:56.995 00:10:56.995 ' 00:10:56.995 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:56.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.995 --rc genhtml_branch_coverage=1 00:10:56.995 --rc genhtml_function_coverage=1 00:10:56.995 --rc genhtml_legend=1 00:10:56.995 --rc geninfo_all_blocks=1 00:10:56.995 --rc geninfo_unexecuted_blocks=1 00:10:56.995 00:10:56.995 ' 00:10:56.995 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:56.995 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:10:56.995 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:56.995 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:56.995 14:15:24 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:56.995 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.995 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.995 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.995 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:10:56.995 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.995 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:10:56.995 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:10:56.995 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:10:56.995 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:10:56.995 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:10:56.995 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:10:56.995 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:10:56.995 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:10:56.995 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:10:56.995 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:10:56.996 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:10:56.996 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:10:56.996 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:10:56.996 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:10:56.996 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:56.996 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:56.996 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:10:56.996 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:10:56.996 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:10:56.996 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:10:56.996 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:56.996 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:10:56.996 ************************************ 00:10:56.996 START TEST dd_inflate_file 00:10:56.996 ************************************ 00:10:56.996 14:15:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:10:56.996 [2024-11-06 14:15:24.592763] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:10:56.996 [2024-11-06 14:15:24.592981] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63424 ] 00:10:57.253 [2024-11-06 14:15:24.784434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.509 [2024-11-06 14:15:24.919499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.765 [2024-11-06 14:15:25.148633] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:57.765  [2024-11-06T14:15:26.773Z] Copying: 64/64 [MB] (average 1306 MBps) 00:10:59.138 00:10:59.138 00:10:59.138 real 0m2.052s 00:10:59.138 user 0m1.674s 00:10:59.138 sys 0m1.255s 00:10:59.138 14:15:26 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:59.138 14:15:26 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:10:59.138 ************************************ 00:10:59.138 END TEST dd_inflate_file 00:10:59.138 ************************************ 00:10:59.139 14:15:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:10:59.139 14:15:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:10:59.139 14:15:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:10:59.139 14:15:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:10:59.139 14:15:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:10:59.139 14:15:26 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:10:59.139 14:15:26 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:10:59.139 14:15:26 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:59.139 14:15:26 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:10:59.139 ************************************ 00:10:59.139 START TEST dd_copy_to_out_bdev 00:10:59.139 ************************************ 00:10:59.139 14:15:26 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:10:59.139 { 00:10:59.139 "subsystems": [ 00:10:59.139 { 00:10:59.139 "subsystem": "bdev", 00:10:59.139 "config": [ 00:10:59.139 { 00:10:59.139 "params": { 00:10:59.139 "trtype": "pcie", 00:10:59.139 "traddr": "0000:00:10.0", 00:10:59.139 "name": "Nvme0" 00:10:59.139 }, 00:10:59.139 "method": "bdev_nvme_attach_controller" 00:10:59.139 }, 00:10:59.139 { 00:10:59.139 "params": { 00:10:59.139 "trtype": "pcie", 00:10:59.139 "traddr": "0000:00:11.0", 00:10:59.139 "name": "Nvme1" 00:10:59.139 }, 00:10:59.139 "method": "bdev_nvme_attach_controller" 00:10:59.139 }, 00:10:59.139 { 00:10:59.139 "method": "bdev_wait_for_examine" 00:10:59.139 } 00:10:59.139 ] 00:10:59.139 } 00:10:59.139 ] 00:10:59.139 } 00:10:59.139 [2024-11-06 14:15:26.728156] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:10:59.139 [2024-11-06 14:15:26.728324] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63475 ] 00:10:59.397 [2024-11-06 14:15:26.919335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.656 [2024-11-06 14:15:27.049459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.656 [2024-11-06 14:15:27.267147] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:01.029  [2024-11-06T14:15:30.039Z] Copying: 64/64 [MB] (average 69 MBps) 00:11:02.404 00:11:02.404 00:11:02.404 real 0m3.101s 00:11:02.404 user 0m2.726s 00:11:02.404 sys 0m2.187s 00:11:02.404 14:15:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:02.404 ************************************ 00:11:02.404 END TEST dd_copy_to_out_bdev 00:11:02.404 14:15:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:02.404 ************************************ 00:11:02.404 14:15:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:11:02.404 14:15:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:11:02.404 14:15:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:02.404 14:15:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:02.404 14:15:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:02.404 ************************************ 00:11:02.404 START TEST dd_offset_magic 00:11:02.404 ************************************ 00:11:02.404 14:15:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1127 -- # offset_magic 00:11:02.404 14:15:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:11:02.404 14:15:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:11:02.404 14:15:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:11:02.404 14:15:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:11:02.404 14:15:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:11:02.404 14:15:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:11:02.404 14:15:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:11:02.404 14:15:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:11:02.404 { 00:11:02.404 "subsystems": [ 00:11:02.404 { 00:11:02.404 "subsystem": "bdev", 00:11:02.404 "config": [ 00:11:02.404 { 00:11:02.404 "params": { 00:11:02.404 "trtype": "pcie", 00:11:02.404 "traddr": "0000:00:10.0", 00:11:02.404 "name": "Nvme0" 00:11:02.404 }, 00:11:02.404 "method": "bdev_nvme_attach_controller" 00:11:02.404 }, 00:11:02.404 { 00:11:02.404 "params": { 00:11:02.404 "trtype": "pcie", 00:11:02.404 "traddr": "0000:00:11.0", 00:11:02.404 "name": "Nvme1" 00:11:02.404 }, 00:11:02.404 "method": 
"bdev_nvme_attach_controller" 00:11:02.404 }, 00:11:02.404 { 00:11:02.404 "method": "bdev_wait_for_examine" 00:11:02.404 } 00:11:02.404 ] 00:11:02.404 } 00:11:02.404 ] 00:11:02.404 } 00:11:02.404 [2024-11-06 14:15:29.905617] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:11:02.404 [2024-11-06 14:15:29.905849] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63532 ] 00:11:02.662 [2024-11-06 14:15:30.098649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.662 [2024-11-06 14:15:30.234242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.919 [2024-11-06 14:15:30.466995] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:03.485  [2024-11-06T14:15:32.055Z] Copying: 65/65 [MB] (average 722 MBps) 00:11:04.420 00:11:04.420 14:15:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:11:04.420 14:15:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:11:04.420 14:15:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:11:04.420 14:15:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:11:04.420 { 00:11:04.420 "subsystems": [ 00:11:04.420 { 00:11:04.420 "subsystem": "bdev", 00:11:04.420 "config": [ 00:11:04.420 { 00:11:04.420 "params": { 00:11:04.420 "trtype": "pcie", 00:11:04.420 "traddr": "0000:00:10.0", 00:11:04.420 "name": "Nvme0" 00:11:04.420 }, 00:11:04.420 "method": "bdev_nvme_attach_controller" 00:11:04.420 }, 00:11:04.420 { 00:11:04.420 "params": { 00:11:04.420 "trtype": "pcie", 00:11:04.420 "traddr": "0000:00:11.0", 00:11:04.420 "name": "Nvme1" 00:11:04.420 }, 00:11:04.420 "method": "bdev_nvme_attach_controller" 00:11:04.420 }, 00:11:04.420 { 00:11:04.420 "method": "bdev_wait_for_examine" 00:11:04.420 } 00:11:04.420 ] 00:11:04.420 } 00:11:04.420 ] 00:11:04.420 } 00:11:04.677 [2024-11-06 14:15:32.078956] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:11:04.677 [2024-11-06 14:15:32.079118] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63573 ] 00:11:04.677 [2024-11-06 14:15:32.279445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.936 [2024-11-06 14:15:32.413387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.193 [2024-11-06 14:15:32.636156] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:05.451  [2024-11-06T14:15:34.464Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:11:06.829 00:11:06.829 14:15:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:11:06.829 14:15:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:11:06.829 14:15:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:11:06.829 14:15:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:11:06.829 14:15:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:11:06.829 14:15:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:11:06.829 14:15:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:11:06.829 { 00:11:06.829 "subsystems": [ 00:11:06.829 { 00:11:06.829 "subsystem": "bdev", 00:11:06.829 "config": [ 00:11:06.829 { 00:11:06.829 "params": { 00:11:06.829 "trtype": "pcie", 00:11:06.829 "traddr": "0000:00:10.0", 00:11:06.829 "name": "Nvme0" 00:11:06.829 }, 00:11:06.829 "method": "bdev_nvme_attach_controller" 00:11:06.829 }, 00:11:06.829 { 00:11:06.829 "params": { 00:11:06.829 "trtype": "pcie", 00:11:06.829 "traddr": "0000:00:11.0", 00:11:06.829 "name": "Nvme1" 00:11:06.829 }, 00:11:06.829 "method": "bdev_nvme_attach_controller" 00:11:06.829 }, 00:11:06.829 { 00:11:06.829 "method": "bdev_wait_for_examine" 00:11:06.829 } 00:11:06.829 ] 00:11:06.829 } 00:11:06.829 ] 00:11:06.829 } 00:11:06.829 [2024-11-06 14:15:34.214590] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:11:06.829 [2024-11-06 14:15:34.214752] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63607 ] 00:11:06.829 [2024-11-06 14:15:34.399398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.088 [2024-11-06 14:15:34.538354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.347 [2024-11-06 14:15:34.771511] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:07.606  [2024-11-06T14:15:36.176Z] Copying: 65/65 [MB] (average 802 MBps) 00:11:08.541 00:11:08.805 14:15:36 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:11:08.805 14:15:36 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:11:08.805 14:15:36 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:11:08.805 14:15:36 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:11:08.805 { 00:11:08.805 "subsystems": [ 00:11:08.805 { 00:11:08.805 "subsystem": "bdev", 00:11:08.805 "config": [ 00:11:08.805 { 00:11:08.805 "params": { 00:11:08.805 "trtype": "pcie", 00:11:08.805 "traddr": "0000:00:10.0", 00:11:08.805 "name": "Nvme0" 00:11:08.805 }, 00:11:08.805 "method": "bdev_nvme_attach_controller" 00:11:08.805 }, 00:11:08.805 { 00:11:08.805 "params": { 00:11:08.805 "trtype": "pcie", 00:11:08.805 "traddr": "0000:00:11.0", 00:11:08.805 "name": "Nvme1" 00:11:08.805 }, 00:11:08.805 "method": "bdev_nvme_attach_controller" 00:11:08.805 }, 00:11:08.805 { 00:11:08.805 "method": "bdev_wait_for_examine" 00:11:08.805 } 00:11:08.805 ] 00:11:08.805 } 00:11:08.805 ] 00:11:08.805 } 00:11:08.805 [2024-11-06 14:15:36.321164] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:11:08.805 [2024-11-06 14:15:36.321323] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63639 ] 00:11:09.063 [2024-11-06 14:15:36.505275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.063 [2024-11-06 14:15:36.631245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.321 [2024-11-06 14:15:36.843906] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:09.579  [2024-11-06T14:15:38.585Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:11:10.950 00:11:10.950 ************************************ 00:11:10.950 END TEST dd_offset_magic 00:11:10.950 ************************************ 00:11:10.950 14:15:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:11:10.950 14:15:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:11:10.950 00:11:10.950 real 0m8.481s 00:11:10.950 user 0m7.016s 00:11:10.950 sys 0m3.025s 00:11:10.950 14:15:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:10.950 14:15:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:11:10.950 14:15:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:11:10.950 14:15:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:11:10.950 14:15:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:11:10.950 14:15:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:11:10.950 14:15:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:11:10.950 14:15:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:11:10.950 14:15:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:11:10.950 14:15:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:11:10.950 14:15:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:11:10.950 14:15:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:11:10.950 14:15:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:10.950 { 00:11:10.950 "subsystems": [ 00:11:10.950 { 00:11:10.950 "subsystem": "bdev", 00:11:10.950 "config": [ 00:11:10.950 { 00:11:10.950 "params": { 00:11:10.950 "trtype": "pcie", 00:11:10.950 "traddr": "0000:00:10.0", 00:11:10.950 "name": "Nvme0" 00:11:10.950 }, 00:11:10.950 "method": "bdev_nvme_attach_controller" 00:11:10.950 }, 00:11:10.950 { 00:11:10.950 "params": { 00:11:10.950 "trtype": "pcie", 00:11:10.950 "traddr": "0000:00:11.0", 00:11:10.950 "name": "Nvme1" 00:11:10.950 }, 00:11:10.950 "method": "bdev_nvme_attach_controller" 00:11:10.950 }, 00:11:10.950 { 00:11:10.950 "method": "bdev_wait_for_examine" 00:11:10.950 } 00:11:10.950 ] 00:11:10.950 } 00:11:10.950 ] 00:11:10.950 } 00:11:10.950 [2024-11-06 14:15:38.472816] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:11:10.950 [2024-11-06 14:15:38.473028] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63689 ] 00:11:11.208 [2024-11-06 14:15:38.661759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.208 [2024-11-06 14:15:38.796816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.466 [2024-11-06 14:15:39.028009] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:11.725  [2024-11-06T14:15:40.736Z] Copying: 5120/5120 [kB] (average 1250 MBps) 00:11:13.101 00:11:13.101 14:15:40 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:11:13.101 14:15:40 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:11:13.101 14:15:40 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:11:13.101 14:15:40 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:11:13.101 14:15:40 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:11:13.101 14:15:40 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:11:13.101 14:15:40 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:11:13.101 14:15:40 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:11:13.101 14:15:40 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:11:13.101 14:15:40 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:13.101 { 00:11:13.101 "subsystems": [ 00:11:13.101 { 00:11:13.101 "subsystem": "bdev", 00:11:13.101 "config": [ 00:11:13.101 { 00:11:13.101 "params": { 00:11:13.101 "trtype": "pcie", 00:11:13.101 "traddr": "0000:00:10.0", 00:11:13.101 "name": "Nvme0" 00:11:13.101 }, 00:11:13.101 "method": "bdev_nvme_attach_controller" 00:11:13.101 }, 00:11:13.101 { 00:11:13.101 "params": { 00:11:13.101 "trtype": "pcie", 00:11:13.101 "traddr": "0000:00:11.0", 00:11:13.101 "name": "Nvme1" 00:11:13.101 }, 00:11:13.101 "method": "bdev_nvme_attach_controller" 00:11:13.101 }, 00:11:13.101 { 00:11:13.101 "method": "bdev_wait_for_examine" 00:11:13.101 } 00:11:13.101 ] 00:11:13.101 } 00:11:13.101 ] 00:11:13.101 } 00:11:13.101 [2024-11-06 14:15:40.663397] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:11:13.101 [2024-11-06 14:15:40.663572] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63722 ] 00:11:13.360 [2024-11-06 14:15:40.855190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.360 [2024-11-06 14:15:40.992041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.618 [2024-11-06 14:15:41.222355] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:13.912  [2024-11-06T14:15:42.920Z] Copying: 5120/5120 [kB] (average 714 MBps) 00:11:15.285 00:11:15.285 14:15:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:11:15.285 00:11:15.285 real 0m18.554s 00:11:15.285 user 0m15.291s 00:11:15.285 sys 0m9.275s 00:11:15.285 14:15:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:15.285 14:15:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:15.285 ************************************ 00:11:15.285 END TEST spdk_dd_bdev_to_bdev 00:11:15.285 ************************************ 00:11:15.285 14:15:42 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:11:15.285 14:15:42 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:11:15.285 14:15:42 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:15.285 14:15:42 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:15.285 14:15:42 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:11:15.285 ************************************ 00:11:15.285 START TEST spdk_dd_uring 00:11:15.285 ************************************ 00:11:15.285 14:15:42 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:11:15.543 * Looking for test storage... 
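Note: the spdk_dd_bdev_to_bdev run above verifies offset copies by planting a 26-byte magic string at the start of a dump file, pushing it through the two NVMe bdevs with --seek/--skip, and reading it back. A condensed sketch of that sequence follows; conf.json stands in for the bdev config the harness pipes over /dev/fd/62, and the standalone layout, file names, and config path are assumptions of the sketch, not part of this run.

    # sketch only: assumes spdk_dd is built at build/bin/spdk_dd and conf.json attaches Nvme0n1/Nvme1n1
    MAGIC='This Is Our Magic, find it'
    echo "$MAGIC" > dump0                                                            # magic sits at the start of the source file
    build/bin/spdk_dd --if=/dev/zero --of=dump0 --oflag=append --bs=1048576 --count=64   # inflate to ~64 MiB
    build/bin/spdk_dd --if=dump0 --ob=Nvme0n1 --json conf.json                       # file -> first NVMe bdev
    build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --bs=1048576 --count=65 --seek=16 --json conf.json   # shift by 16 blocks
    build/bin/spdk_dd --ib=Nvme1n1 --of=dump1 --bs=1048576 --count=1 --skip=16 --json conf.json      # read the shifted block back
    read -rn26 check < dump1 && [[ $check == "$MAGIC" ]] && echo 'magic found'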
00:11:15.543 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:11:15.543 14:15:43 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:15.543 14:15:43 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1691 -- # lcov --version 00:11:15.543 14:15:43 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:15.543 14:15:43 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:15.543 14:15:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:15.543 14:15:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:15.543 14:15:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:15.543 14:15:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:11:15.543 14:15:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:11:15.543 14:15:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:11:15.543 14:15:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:11:15.543 14:15:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:11:15.543 14:15:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:11:15.543 14:15:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:11:15.543 14:15:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:15.543 14:15:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:11:15.543 14:15:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:11:15.543 14:15:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:15.543 14:15:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:15.543 14:15:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:11:15.543 14:15:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:11:15.543 14:15:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:15.543 14:15:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:11:15.543 14:15:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:11:15.543 14:15:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:11:15.543 14:15:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:11:15.543 14:15:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:15.543 14:15:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:11:15.543 14:15:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:11:15.543 14:15:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:15.543 14:15:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:15.543 14:15:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:11:15.543 14:15:43 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:15.543 14:15:43 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:15.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.543 --rc genhtml_branch_coverage=1 00:11:15.543 --rc genhtml_function_coverage=1 00:11:15.543 --rc genhtml_legend=1 00:11:15.543 --rc geninfo_all_blocks=1 00:11:15.543 --rc geninfo_unexecuted_blocks=1 00:11:15.543 00:11:15.543 ' 00:11:15.543 14:15:43 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:15.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.543 --rc genhtml_branch_coverage=1 00:11:15.543 --rc genhtml_function_coverage=1 00:11:15.543 --rc genhtml_legend=1 00:11:15.543 --rc geninfo_all_blocks=1 00:11:15.543 --rc geninfo_unexecuted_blocks=1 00:11:15.543 00:11:15.543 ' 00:11:15.543 14:15:43 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:15.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.543 --rc genhtml_branch_coverage=1 00:11:15.543 --rc genhtml_function_coverage=1 00:11:15.543 --rc genhtml_legend=1 00:11:15.543 --rc geninfo_all_blocks=1 00:11:15.543 --rc geninfo_unexecuted_blocks=1 00:11:15.543 00:11:15.543 ' 00:11:15.543 14:15:43 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:15.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.543 --rc genhtml_branch_coverage=1 00:11:15.543 --rc genhtml_function_coverage=1 00:11:15.543 --rc genhtml_legend=1 00:11:15.543 --rc geninfo_all_blocks=1 00:11:15.543 --rc geninfo_unexecuted_blocks=1 00:11:15.543 00:11:15.543 ' 00:11:15.544 14:15:43 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:15.544 14:15:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:11:15.544 14:15:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:15.544 14:15:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:15.544 14:15:43 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:15.544 14:15:43 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.544 14:15:43 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.544 14:15:43 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.544 14:15:43 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:11:15.544 14:15:43 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.544 14:15:43 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:11:15.544 14:15:43 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:15.544 14:15:43 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:15.544 14:15:43 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:11:15.544 ************************************ 00:11:15.544 START TEST dd_uring_copy 00:11:15.544 ************************************ 00:11:15.544 14:15:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1127 -- # uring_zram_copy 00:11:15.544 14:15:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:11:15.544 14:15:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:11:15.544 14:15:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:11:15.544 14:15:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:11:15.544 
14:15:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:11:15.544 14:15:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:11:15.544 14:15:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:11:15.544 14:15:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:11:15.544 14:15:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:11:15.544 14:15:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:11:15.544 14:15:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:11:15.544 14:15:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:11:15.544 14:15:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:11:15.544 14:15:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:11:15.544 14:15:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:11:15.544 14:15:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:11:15.544 14:15:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:11:15.544 14:15:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:11:15.544 14:15:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:11:15.544 14:15:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:11:15.544 14:15:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:11:15.544 14:15:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:11:15.544 14:15:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:11:15.544 14:15:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:11:15.544 14:15:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:11:15.802 14:15:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=y2vjzjhkzx3m1tsce2po9z4o1x67fvhsgd3zg83adld04revuurxiqpwfyqinn6nyoc5gwifqsh66joyi4da2oez3aygbeni2s1lgrmzzuu2ntfvtcl0k9gv0znd61zacuvnr5uzey4snguvp5mxd4q64usjwbpxbzm4igfzyd84hz1y7x6wsyki72vdicxw649tnk0829a09sklj56sl8elty0gm6bagb6eawl7d7bc67cecy9bkpbxz7bjl0k5h76wd3ksjw5ijjneb0xohcez8hx6aumtqa2lg15t2jdfynhv0b2m26521o1d6c7uuk9xs4vmfecaj2hhwg32y6l91syo9p4sm66pxktnqn2rkrgrudtgw8mewuainxm9dsns873np3ex1mlugo0fful1x6hnho7urh9h6cm6lvy9w97wsaue3y2lxexzi5ijrcbhzm46zeuwbeai78zh3wxp4i84sl0dibxbftx13e50za56euoa4wxxybgnx13f76q1n0xgwfze0n5wf9n124jbw0uatw2xjkcszdgskqqsuxvu8wy7ardub58kmt3r7nk44ib73rv8983uvhz2d1y8dw4gn8bpidiorl9q9cuafag3h3wkrmqef3ovngd6oyq0fxr8jj7c4acul5bf8t9prwdgh3bp8hjltizp33f11nlj7fckggiuc5jtb9lr8l27jg5jq02jb7oimjf4s32f9us1nj7hfbqje4fpsq8xzo5pvngfo8hh8q3cbmr52wqtngzvt49p6hijq887df7ijxyzqnlgrlcs4k8jjpdecm82yiqbl7eul9n02d1v81ptnv32ammui90k8gsh64lygwzrjmews43jk07duz7hiwgvfa76zsv3sr4ca53vr295u5ffaal50ju9tjns0ca948loie47tqs4ctezwei5t1dh38ge7ql84rzgtrvlgjeuim2jqpahw2mbrx47iqmms31qe3ebc9tvtr1re281uk5rvit5wa8kk58ic6z9 00:11:15.802 14:15:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
y2vjzjhkzx3m1tsce2po9z4o1x67fvhsgd3zg83adld04revuurxiqpwfyqinn6nyoc5gwifqsh66joyi4da2oez3aygbeni2s1lgrmzzuu2ntfvtcl0k9gv0znd61zacuvnr5uzey4snguvp5mxd4q64usjwbpxbzm4igfzyd84hz1y7x6wsyki72vdicxw649tnk0829a09sklj56sl8elty0gm6bagb6eawl7d7bc67cecy9bkpbxz7bjl0k5h76wd3ksjw5ijjneb0xohcez8hx6aumtqa2lg15t2jdfynhv0b2m26521o1d6c7uuk9xs4vmfecaj2hhwg32y6l91syo9p4sm66pxktnqn2rkrgrudtgw8mewuainxm9dsns873np3ex1mlugo0fful1x6hnho7urh9h6cm6lvy9w97wsaue3y2lxexzi5ijrcbhzm46zeuwbeai78zh3wxp4i84sl0dibxbftx13e50za56euoa4wxxybgnx13f76q1n0xgwfze0n5wf9n124jbw0uatw2xjkcszdgskqqsuxvu8wy7ardub58kmt3r7nk44ib73rv8983uvhz2d1y8dw4gn8bpidiorl9q9cuafag3h3wkrmqef3ovngd6oyq0fxr8jj7c4acul5bf8t9prwdgh3bp8hjltizp33f11nlj7fckggiuc5jtb9lr8l27jg5jq02jb7oimjf4s32f9us1nj7hfbqje4fpsq8xzo5pvngfo8hh8q3cbmr52wqtngzvt49p6hijq887df7ijxyzqnlgrlcs4k8jjpdecm82yiqbl7eul9n02d1v81ptnv32ammui90k8gsh64lygwzrjmews43jk07duz7hiwgvfa76zsv3sr4ca53vr295u5ffaal50ju9tjns0ca948loie47tqs4ctezwei5t1dh38ge7ql84rzgtrvlgjeuim2jqpahw2mbrx47iqmms31qe3ebc9tvtr1re281uk5rvit5wa8kk58ic6z9 00:11:15.802 14:15:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:11:15.802 [2024-11-06 14:15:43.291559] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:11:15.802 [2024-11-06 14:15:43.291944] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63819 ] 00:11:16.061 [2024-11-06 14:15:43.480376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.061 [2024-11-06 14:15:43.613885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.320 [2024-11-06 14:15:43.839910] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:17.695  [2024-11-06T14:15:47.863Z] Copying: 511/511 [MB] (average 1077 MBps) 00:11:20.228 00:11:20.228 14:15:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:11:20.228 14:15:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:11:20.228 14:15:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:20.228 14:15:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:11:20.228 { 00:11:20.228 "subsystems": [ 00:11:20.228 { 00:11:20.228 "subsystem": "bdev", 00:11:20.228 "config": [ 00:11:20.228 { 00:11:20.228 "params": { 00:11:20.228 "block_size": 512, 00:11:20.228 "num_blocks": 1048576, 00:11:20.228 "name": "malloc0" 00:11:20.228 }, 00:11:20.228 "method": "bdev_malloc_create" 00:11:20.228 }, 00:11:20.228 { 00:11:20.228 "params": { 00:11:20.228 "filename": "/dev/zram1", 00:11:20.228 "name": "uring0" 00:11:20.229 }, 00:11:20.229 "method": "bdev_uring_create" 00:11:20.229 }, 00:11:20.229 { 00:11:20.229 "method": "bdev_wait_for_examine" 00:11:20.229 } 00:11:20.229 ] 00:11:20.229 } 00:11:20.229 ] 00:11:20.229 } 00:11:20.229 [2024-11-06 14:15:47.757020] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:11:20.229 [2024-11-06 14:15:47.757166] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63869 ] 00:11:20.487 [2024-11-06 14:15:47.945256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.487 [2024-11-06 14:15:48.073713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.747 [2024-11-06 14:15:48.292730] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:22.652  [2024-11-06T14:15:51.224Z] Copying: 225/512 [MB] (225 MBps) [2024-11-06T14:15:51.482Z] Copying: 447/512 [MB] (222 MBps) [2024-11-06T14:15:54.019Z] Copying: 512/512 [MB] (average 223 MBps) 00:11:26.384 00:11:26.384 14:15:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:11:26.384 14:15:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:11:26.384 14:15:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:26.384 14:15:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:11:26.384 { 00:11:26.384 "subsystems": [ 00:11:26.384 { 00:11:26.384 "subsystem": "bdev", 00:11:26.384 "config": [ 00:11:26.384 { 00:11:26.384 "params": { 00:11:26.384 "block_size": 512, 00:11:26.384 "num_blocks": 1048576, 00:11:26.384 "name": "malloc0" 00:11:26.384 }, 00:11:26.384 "method": "bdev_malloc_create" 00:11:26.384 }, 00:11:26.384 { 00:11:26.384 "params": { 00:11:26.384 "filename": "/dev/zram1", 00:11:26.384 "name": "uring0" 00:11:26.384 }, 00:11:26.384 "method": "bdev_uring_create" 00:11:26.384 }, 00:11:26.384 { 00:11:26.384 "method": "bdev_wait_for_examine" 00:11:26.384 } 00:11:26.384 ] 00:11:26.384 } 00:11:26.384 ] 00:11:26.384 } 00:11:26.384 [2024-11-06 14:15:53.974145] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:11:26.384 [2024-11-06 14:15:53.974299] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63947 ] 00:11:26.643 [2024-11-06 14:15:54.160489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:26.902 [2024-11-06 14:15:54.278010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.902 [2024-11-06 14:15:54.483911] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:28.828  [2024-11-06T14:15:57.398Z] Copying: 179/512 [MB] (179 MBps) [2024-11-06T14:15:58.332Z] Copying: 352/512 [MB] (173 MBps) [2024-11-06T14:15:58.332Z] Copying: 508/512 [MB] (156 MBps) [2024-11-06T14:16:00.911Z] Copying: 512/512 [MB] (average 169 MBps) 00:11:33.276 00:11:33.276 14:16:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:11:33.276 14:16:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ y2vjzjhkzx3m1tsce2po9z4o1x67fvhsgd3zg83adld04revuurxiqpwfyqinn6nyoc5gwifqsh66joyi4da2oez3aygbeni2s1lgrmzzuu2ntfvtcl0k9gv0znd61zacuvnr5uzey4snguvp5mxd4q64usjwbpxbzm4igfzyd84hz1y7x6wsyki72vdicxw649tnk0829a09sklj56sl8elty0gm6bagb6eawl7d7bc67cecy9bkpbxz7bjl0k5h76wd3ksjw5ijjneb0xohcez8hx6aumtqa2lg15t2jdfynhv0b2m26521o1d6c7uuk9xs4vmfecaj2hhwg32y6l91syo9p4sm66pxktnqn2rkrgrudtgw8mewuainxm9dsns873np3ex1mlugo0fful1x6hnho7urh9h6cm6lvy9w97wsaue3y2lxexzi5ijrcbhzm46zeuwbeai78zh3wxp4i84sl0dibxbftx13e50za56euoa4wxxybgnx13f76q1n0xgwfze0n5wf9n124jbw0uatw2xjkcszdgskqqsuxvu8wy7ardub58kmt3r7nk44ib73rv8983uvhz2d1y8dw4gn8bpidiorl9q9cuafag3h3wkrmqef3ovngd6oyq0fxr8jj7c4acul5bf8t9prwdgh3bp8hjltizp33f11nlj7fckggiuc5jtb9lr8l27jg5jq02jb7oimjf4s32f9us1nj7hfbqje4fpsq8xzo5pvngfo8hh8q3cbmr52wqtngzvt49p6hijq887df7ijxyzqnlgrlcs4k8jjpdecm82yiqbl7eul9n02d1v81ptnv32ammui90k8gsh64lygwzrjmews43jk07duz7hiwgvfa76zsv3sr4ca53vr295u5ffaal50ju9tjns0ca948loie47tqs4ctezwei5t1dh38ge7ql84rzgtrvlgjeuim2jqpahw2mbrx47iqmms31qe3ebc9tvtr1re281uk5rvit5wa8kk58ic6z9 == 
\y\2\v\j\z\j\h\k\z\x\3\m\1\t\s\c\e\2\p\o\9\z\4\o\1\x\6\7\f\v\h\s\g\d\3\z\g\8\3\a\d\l\d\0\4\r\e\v\u\u\r\x\i\q\p\w\f\y\q\i\n\n\6\n\y\o\c\5\g\w\i\f\q\s\h\6\6\j\o\y\i\4\d\a\2\o\e\z\3\a\y\g\b\e\n\i\2\s\1\l\g\r\m\z\z\u\u\2\n\t\f\v\t\c\l\0\k\9\g\v\0\z\n\d\6\1\z\a\c\u\v\n\r\5\u\z\e\y\4\s\n\g\u\v\p\5\m\x\d\4\q\6\4\u\s\j\w\b\p\x\b\z\m\4\i\g\f\z\y\d\8\4\h\z\1\y\7\x\6\w\s\y\k\i\7\2\v\d\i\c\x\w\6\4\9\t\n\k\0\8\2\9\a\0\9\s\k\l\j\5\6\s\l\8\e\l\t\y\0\g\m\6\b\a\g\b\6\e\a\w\l\7\d\7\b\c\6\7\c\e\c\y\9\b\k\p\b\x\z\7\b\j\l\0\k\5\h\7\6\w\d\3\k\s\j\w\5\i\j\j\n\e\b\0\x\o\h\c\e\z\8\h\x\6\a\u\m\t\q\a\2\l\g\1\5\t\2\j\d\f\y\n\h\v\0\b\2\m\2\6\5\2\1\o\1\d\6\c\7\u\u\k\9\x\s\4\v\m\f\e\c\a\j\2\h\h\w\g\3\2\y\6\l\9\1\s\y\o\9\p\4\s\m\6\6\p\x\k\t\n\q\n\2\r\k\r\g\r\u\d\t\g\w\8\m\e\w\u\a\i\n\x\m\9\d\s\n\s\8\7\3\n\p\3\e\x\1\m\l\u\g\o\0\f\f\u\l\1\x\6\h\n\h\o\7\u\r\h\9\h\6\c\m\6\l\v\y\9\w\9\7\w\s\a\u\e\3\y\2\l\x\e\x\z\i\5\i\j\r\c\b\h\z\m\4\6\z\e\u\w\b\e\a\i\7\8\z\h\3\w\x\p\4\i\8\4\s\l\0\d\i\b\x\b\f\t\x\1\3\e\5\0\z\a\5\6\e\u\o\a\4\w\x\x\y\b\g\n\x\1\3\f\7\6\q\1\n\0\x\g\w\f\z\e\0\n\5\w\f\9\n\1\2\4\j\b\w\0\u\a\t\w\2\x\j\k\c\s\z\d\g\s\k\q\q\s\u\x\v\u\8\w\y\7\a\r\d\u\b\5\8\k\m\t\3\r\7\n\k\4\4\i\b\7\3\r\v\8\9\8\3\u\v\h\z\2\d\1\y\8\d\w\4\g\n\8\b\p\i\d\i\o\r\l\9\q\9\c\u\a\f\a\g\3\h\3\w\k\r\m\q\e\f\3\o\v\n\g\d\6\o\y\q\0\f\x\r\8\j\j\7\c\4\a\c\u\l\5\b\f\8\t\9\p\r\w\d\g\h\3\b\p\8\h\j\l\t\i\z\p\3\3\f\1\1\n\l\j\7\f\c\k\g\g\i\u\c\5\j\t\b\9\l\r\8\l\2\7\j\g\5\j\q\0\2\j\b\7\o\i\m\j\f\4\s\3\2\f\9\u\s\1\n\j\7\h\f\b\q\j\e\4\f\p\s\q\8\x\z\o\5\p\v\n\g\f\o\8\h\h\8\q\3\c\b\m\r\5\2\w\q\t\n\g\z\v\t\4\9\p\6\h\i\j\q\8\8\7\d\f\7\i\j\x\y\z\q\n\l\g\r\l\c\s\4\k\8\j\j\p\d\e\c\m\8\2\y\i\q\b\l\7\e\u\l\9\n\0\2\d\1\v\8\1\p\t\n\v\3\2\a\m\m\u\i\9\0\k\8\g\s\h\6\4\l\y\g\w\z\r\j\m\e\w\s\4\3\j\k\0\7\d\u\z\7\h\i\w\g\v\f\a\7\6\z\s\v\3\s\r\4\c\a\5\3\v\r\2\9\5\u\5\f\f\a\a\l\5\0\j\u\9\t\j\n\s\0\c\a\9\4\8\l\o\i\e\4\7\t\q\s\4\c\t\e\z\w\e\i\5\t\1\d\h\3\8\g\e\7\q\l\8\4\r\z\g\t\r\v\l\g\j\e\u\i\m\2\j\q\p\a\h\w\2\m\b\r\x\4\7\i\q\m\m\s\3\1\q\e\3\e\b\c\9\t\v\t\r\1\r\e\2\8\1\u\k\5\r\v\i\t\5\w\a\8\k\k\5\8\i\c\6\z\9 ]] 00:11:33.276 14:16:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:11:33.277 14:16:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ y2vjzjhkzx3m1tsce2po9z4o1x67fvhsgd3zg83adld04revuurxiqpwfyqinn6nyoc5gwifqsh66joyi4da2oez3aygbeni2s1lgrmzzuu2ntfvtcl0k9gv0znd61zacuvnr5uzey4snguvp5mxd4q64usjwbpxbzm4igfzyd84hz1y7x6wsyki72vdicxw649tnk0829a09sklj56sl8elty0gm6bagb6eawl7d7bc67cecy9bkpbxz7bjl0k5h76wd3ksjw5ijjneb0xohcez8hx6aumtqa2lg15t2jdfynhv0b2m26521o1d6c7uuk9xs4vmfecaj2hhwg32y6l91syo9p4sm66pxktnqn2rkrgrudtgw8mewuainxm9dsns873np3ex1mlugo0fful1x6hnho7urh9h6cm6lvy9w97wsaue3y2lxexzi5ijrcbhzm46zeuwbeai78zh3wxp4i84sl0dibxbftx13e50za56euoa4wxxybgnx13f76q1n0xgwfze0n5wf9n124jbw0uatw2xjkcszdgskqqsuxvu8wy7ardub58kmt3r7nk44ib73rv8983uvhz2d1y8dw4gn8bpidiorl9q9cuafag3h3wkrmqef3ovngd6oyq0fxr8jj7c4acul5bf8t9prwdgh3bp8hjltizp33f11nlj7fckggiuc5jtb9lr8l27jg5jq02jb7oimjf4s32f9us1nj7hfbqje4fpsq8xzo5pvngfo8hh8q3cbmr52wqtngzvt49p6hijq887df7ijxyzqnlgrlcs4k8jjpdecm82yiqbl7eul9n02d1v81ptnv32ammui90k8gsh64lygwzrjmews43jk07duz7hiwgvfa76zsv3sr4ca53vr295u5ffaal50ju9tjns0ca948loie47tqs4ctezwei5t1dh38ge7ql84rzgtrvlgjeuim2jqpahw2mbrx47iqmms31qe3ebc9tvtr1re281uk5rvit5wa8kk58ic6z9 == 
\y\2\v\j\z\j\h\k\z\x\3\m\1\t\s\c\e\2\p\o\9\z\4\o\1\x\6\7\f\v\h\s\g\d\3\z\g\8\3\a\d\l\d\0\4\r\e\v\u\u\r\x\i\q\p\w\f\y\q\i\n\n\6\n\y\o\c\5\g\w\i\f\q\s\h\6\6\j\o\y\i\4\d\a\2\o\e\z\3\a\y\g\b\e\n\i\2\s\1\l\g\r\m\z\z\u\u\2\n\t\f\v\t\c\l\0\k\9\g\v\0\z\n\d\6\1\z\a\c\u\v\n\r\5\u\z\e\y\4\s\n\g\u\v\p\5\m\x\d\4\q\6\4\u\s\j\w\b\p\x\b\z\m\4\i\g\f\z\y\d\8\4\h\z\1\y\7\x\6\w\s\y\k\i\7\2\v\d\i\c\x\w\6\4\9\t\n\k\0\8\2\9\a\0\9\s\k\l\j\5\6\s\l\8\e\l\t\y\0\g\m\6\b\a\g\b\6\e\a\w\l\7\d\7\b\c\6\7\c\e\c\y\9\b\k\p\b\x\z\7\b\j\l\0\k\5\h\7\6\w\d\3\k\s\j\w\5\i\j\j\n\e\b\0\x\o\h\c\e\z\8\h\x\6\a\u\m\t\q\a\2\l\g\1\5\t\2\j\d\f\y\n\h\v\0\b\2\m\2\6\5\2\1\o\1\d\6\c\7\u\u\k\9\x\s\4\v\m\f\e\c\a\j\2\h\h\w\g\3\2\y\6\l\9\1\s\y\o\9\p\4\s\m\6\6\p\x\k\t\n\q\n\2\r\k\r\g\r\u\d\t\g\w\8\m\e\w\u\a\i\n\x\m\9\d\s\n\s\8\7\3\n\p\3\e\x\1\m\l\u\g\o\0\f\f\u\l\1\x\6\h\n\h\o\7\u\r\h\9\h\6\c\m\6\l\v\y\9\w\9\7\w\s\a\u\e\3\y\2\l\x\e\x\z\i\5\i\j\r\c\b\h\z\m\4\6\z\e\u\w\b\e\a\i\7\8\z\h\3\w\x\p\4\i\8\4\s\l\0\d\i\b\x\b\f\t\x\1\3\e\5\0\z\a\5\6\e\u\o\a\4\w\x\x\y\b\g\n\x\1\3\f\7\6\q\1\n\0\x\g\w\f\z\e\0\n\5\w\f\9\n\1\2\4\j\b\w\0\u\a\t\w\2\x\j\k\c\s\z\d\g\s\k\q\q\s\u\x\v\u\8\w\y\7\a\r\d\u\b\5\8\k\m\t\3\r\7\n\k\4\4\i\b\7\3\r\v\8\9\8\3\u\v\h\z\2\d\1\y\8\d\w\4\g\n\8\b\p\i\d\i\o\r\l\9\q\9\c\u\a\f\a\g\3\h\3\w\k\r\m\q\e\f\3\o\v\n\g\d\6\o\y\q\0\f\x\r\8\j\j\7\c\4\a\c\u\l\5\b\f\8\t\9\p\r\w\d\g\h\3\b\p\8\h\j\l\t\i\z\p\3\3\f\1\1\n\l\j\7\f\c\k\g\g\i\u\c\5\j\t\b\9\l\r\8\l\2\7\j\g\5\j\q\0\2\j\b\7\o\i\m\j\f\4\s\3\2\f\9\u\s\1\n\j\7\h\f\b\q\j\e\4\f\p\s\q\8\x\z\o\5\p\v\n\g\f\o\8\h\h\8\q\3\c\b\m\r\5\2\w\q\t\n\g\z\v\t\4\9\p\6\h\i\j\q\8\8\7\d\f\7\i\j\x\y\z\q\n\l\g\r\l\c\s\4\k\8\j\j\p\d\e\c\m\8\2\y\i\q\b\l\7\e\u\l\9\n\0\2\d\1\v\8\1\p\t\n\v\3\2\a\m\m\u\i\9\0\k\8\g\s\h\6\4\l\y\g\w\z\r\j\m\e\w\s\4\3\j\k\0\7\d\u\z\7\h\i\w\g\v\f\a\7\6\z\s\v\3\s\r\4\c\a\5\3\v\r\2\9\5\u\5\f\f\a\a\l\5\0\j\u\9\t\j\n\s\0\c\a\9\4\8\l\o\i\e\4\7\t\q\s\4\c\t\e\z\w\e\i\5\t\1\d\h\3\8\g\e\7\q\l\8\4\r\z\g\t\r\v\l\g\j\e\u\i\m\2\j\q\p\a\h\w\2\m\b\r\x\4\7\i\q\m\m\s\3\1\q\e\3\e\b\c\9\t\v\t\r\1\r\e\2\8\1\u\k\5\r\v\i\t\5\w\a\8\k\k\5\8\i\c\6\z\9 ]] 00:11:33.277 14:16:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:11:33.842 14:16:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:11:33.842 14:16:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:11:33.842 14:16:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:33.843 14:16:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:11:33.843 { 00:11:33.843 "subsystems": [ 00:11:33.843 { 00:11:33.843 "subsystem": "bdev", 00:11:33.843 "config": [ 00:11:33.843 { 00:11:33.843 "params": { 00:11:33.843 "block_size": 512, 00:11:33.843 "num_blocks": 1048576, 00:11:33.843 "name": "malloc0" 00:11:33.843 }, 00:11:33.843 "method": "bdev_malloc_create" 00:11:33.843 }, 00:11:33.843 { 00:11:33.843 "params": { 00:11:33.843 "filename": "/dev/zram1", 00:11:33.843 "name": "uring0" 00:11:33.843 }, 00:11:33.843 "method": "bdev_uring_create" 00:11:33.843 }, 00:11:33.843 { 00:11:33.843 "method": "bdev_wait_for_examine" 00:11:33.843 } 00:11:33.843 ] 00:11:33.843 } 00:11:33.843 ] 00:11:33.843 } 00:11:33.843 [2024-11-06 14:16:01.390596] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:11:33.843 [2024-11-06 14:16:01.390735] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64084 ] 00:11:34.100 [2024-11-06 14:16:01.577127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.100 [2024-11-06 14:16:01.699102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.373 [2024-11-06 14:16:01.915350] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:36.273  [2024-11-06T14:16:04.839Z] Copying: 171/512 [MB] (171 MBps) [2024-11-06T14:16:05.772Z] Copying: 342/512 [MB] (170 MBps) [2024-11-06T14:16:05.772Z] Copying: 506/512 [MB] (164 MBps) [2024-11-06T14:16:08.300Z] Copying: 512/512 [MB] (average 169 MBps) 00:11:40.665 00:11:40.923 14:16:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:11:40.923 14:16:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:11:40.923 14:16:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:11:40.923 14:16:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:11:40.923 14:16:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:11:40.923 14:16:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:11:40.923 14:16:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:40.923 14:16:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:11:40.923 { 00:11:40.923 "subsystems": [ 00:11:40.923 { 00:11:40.923 "subsystem": "bdev", 00:11:40.923 "config": [ 00:11:40.923 { 00:11:40.923 "params": { 00:11:40.923 "block_size": 512, 00:11:40.923 "num_blocks": 1048576, 00:11:40.923 "name": "malloc0" 00:11:40.923 }, 00:11:40.923 "method": "bdev_malloc_create" 00:11:40.923 }, 00:11:40.923 { 00:11:40.923 "params": { 00:11:40.923 "filename": "/dev/zram1", 00:11:40.923 "name": "uring0" 00:11:40.923 }, 00:11:40.923 "method": "bdev_uring_create" 00:11:40.923 }, 00:11:40.923 { 00:11:40.923 "params": { 00:11:40.923 "name": "uring0" 00:11:40.923 }, 00:11:40.923 "method": "bdev_uring_delete" 00:11:40.923 }, 00:11:40.923 { 00:11:40.923 "method": "bdev_wait_for_examine" 00:11:40.923 } 00:11:40.923 ] 00:11:40.923 } 00:11:40.923 ] 00:11:40.923 } 00:11:40.923 [2024-11-06 14:16:08.429707] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:11:40.923 [2024-11-06 14:16:08.429866] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64169 ] 00:11:41.181 [2024-11-06 14:16:08.619084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.181 [2024-11-06 14:16:08.750523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.438 [2024-11-06 14:16:08.990033] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:42.372  [2024-11-06T14:16:12.539Z] Copying: 0/0 [B] (average 0 Bps) 00:11:44.904 00:11:44.904 14:16:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:11:44.904 14:16:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:11:44.904 14:16:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:11:44.904 14:16:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # local es=0 00:11:44.904 14:16:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:44.904 14:16:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:11:44.904 14:16:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:11:44.904 14:16:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:44.904 14:16:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:44.904 14:16:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:44.904 14:16:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:44.904 14:16:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:44.904 14:16:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:44.904 14:16:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:44.904 14:16:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:44.904 14:16:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:11:44.904 { 00:11:44.904 "subsystems": [ 00:11:44.904 { 00:11:44.904 "subsystem": "bdev", 00:11:44.904 "config": [ 00:11:44.904 { 00:11:44.904 "params": { 00:11:44.904 "block_size": 512, 00:11:44.905 "num_blocks": 1048576, 00:11:44.905 "name": "malloc0" 00:11:44.905 }, 00:11:44.905 "method": "bdev_malloc_create" 00:11:44.905 }, 00:11:44.905 { 00:11:44.905 "params": { 00:11:44.905 "filename": "/dev/zram1", 00:11:44.905 "name": "uring0" 00:11:44.905 }, 00:11:44.905 "method": "bdev_uring_create" 00:11:44.905 }, 00:11:44.905 { 00:11:44.905 "params": { 00:11:44.905 "name": "uring0" 00:11:44.905 }, 00:11:44.905 "method": 
"bdev_uring_delete" 00:11:44.905 }, 00:11:44.905 { 00:11:44.905 "method": "bdev_wait_for_examine" 00:11:44.905 } 00:11:44.905 ] 00:11:44.905 } 00:11:44.905 ] 00:11:44.905 } 00:11:45.163 [2024-11-06 14:16:12.549296] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:11:45.163 [2024-11-06 14:16:12.549443] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64232 ] 00:11:45.163 [2024-11-06 14:16:12.725762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:45.422 [2024-11-06 14:16:12.852197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.681 [2024-11-06 14:16:13.073705] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:46.248 [2024-11-06 14:16:13.811641] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:11:46.248 [2024-11-06 14:16:13.811713] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:11:46.248 [2024-11-06 14:16:13.811733] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:11:46.248 [2024-11-06 14:16:13.811754] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:48.780 [2024-11-06 14:16:16.138600] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:48.780 14:16:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # es=237 00:11:48.780 14:16:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:48.780 14:16:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@662 -- # es=109 00:11:48.780 14:16:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # case "$es" in 00:11:48.780 14:16:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@670 -- # es=1 00:11:48.780 14:16:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:48.780 14:16:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:11:48.780 14:16:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:11:48.780 14:16:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:11:48.780 14:16:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:11:49.042 14:16:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:11:49.042 14:16:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:11:49.042 ************************************ 00:11:49.042 END TEST dd_uring_copy 00:11:49.042 ************************************ 00:11:49.042 00:11:49.042 real 0m33.483s 00:11:49.042 user 0m27.708s 00:11:49.042 sys 0m15.790s 00:11:49.042 14:16:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:49.042 14:16:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:11:49.301 00:11:49.301 real 0m33.838s 00:11:49.301 user 0m27.896s 00:11:49.301 sys 0m15.962s 00:11:49.301 14:16:16 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:49.301 ************************************ 00:11:49.301 END TEST spdk_dd_uring 00:11:49.301 
14:16:16 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:11:49.301 ************************************ 00:11:49.301 14:16:16 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:11:49.301 14:16:16 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:49.301 14:16:16 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:49.301 14:16:16 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:11:49.301 ************************************ 00:11:49.301 START TEST spdk_dd_sparse 00:11:49.301 ************************************ 00:11:49.301 14:16:16 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:11:49.301 * Looking for test storage... 00:11:49.301 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:11:49.301 14:16:16 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:49.301 14:16:16 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1691 -- # lcov --version 00:11:49.301 14:16:16 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:49.562 14:16:16 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:49.562 14:16:16 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:49.562 14:16:16 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:49.562 14:16:16 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:49.562 14:16:16 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:11:49.562 14:16:16 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:11:49.562 14:16:16 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:11:49.562 14:16:16 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:11:49.562 14:16:16 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:11:49.562 14:16:16 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:11:49.562 14:16:16 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:11:49.562 14:16:16 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:49.562 14:16:16 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:11:49.562 14:16:16 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:11:49.562 14:16:16 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:49.562 14:16:16 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:49.562 14:16:16 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:11:49.562 14:16:16 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:11:49.562 14:16:16 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:49.562 14:16:16 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:11:49.562 14:16:16 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:11:49.562 14:16:16 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:11:49.562 14:16:16 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:11:49.562 14:16:16 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:49.562 14:16:16 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:11:49.562 14:16:16 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:11:49.562 14:16:16 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:49.562 14:16:16 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:49.562 14:16:16 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:11:49.562 14:16:16 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:49.562 14:16:16 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:49.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.562 --rc genhtml_branch_coverage=1 00:11:49.562 --rc genhtml_function_coverage=1 00:11:49.562 --rc genhtml_legend=1 00:11:49.562 --rc geninfo_all_blocks=1 00:11:49.562 --rc geninfo_unexecuted_blocks=1 00:11:49.562 00:11:49.562 ' 00:11:49.562 14:16:16 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:49.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.562 --rc genhtml_branch_coverage=1 00:11:49.562 --rc genhtml_function_coverage=1 00:11:49.562 --rc genhtml_legend=1 00:11:49.562 --rc geninfo_all_blocks=1 00:11:49.562 --rc geninfo_unexecuted_blocks=1 00:11:49.562 00:11:49.562 ' 00:11:49.562 14:16:16 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:49.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.562 --rc genhtml_branch_coverage=1 00:11:49.562 --rc genhtml_function_coverage=1 00:11:49.562 --rc genhtml_legend=1 00:11:49.562 --rc geninfo_all_blocks=1 00:11:49.562 --rc geninfo_unexecuted_blocks=1 00:11:49.562 00:11:49.562 ' 00:11:49.562 14:16:16 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:49.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.562 --rc genhtml_branch_coverage=1 00:11:49.562 --rc genhtml_function_coverage=1 00:11:49.562 --rc genhtml_legend=1 00:11:49.562 --rc geninfo_all_blocks=1 00:11:49.562 --rc geninfo_unexecuted_blocks=1 00:11:49.562 00:11:49.562 ' 00:11:49.562 14:16:16 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:49.562 14:16:16 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:11:49.562 14:16:17 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:49.562 14:16:17 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:49.562 14:16:17 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:49.562 14:16:17 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.562 14:16:17 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.562 14:16:17 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.562 14:16:17 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:11:49.562 14:16:17 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.562 14:16:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:11:49.562 14:16:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:11:49.562 14:16:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:11:49.562 14:16:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:11:49.562 14:16:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:11:49.562 14:16:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:11:49.562 14:16:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:11:49.562 14:16:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:11:49.562 14:16:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:11:49.562 14:16:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:11:49.562 14:16:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:11:49.562 1+0 records in 00:11:49.562 1+0 records out 00:11:49.562 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.0103211 s, 406 MB/s 00:11:49.562 14:16:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:11:49.562 1+0 records in 00:11:49.562 1+0 records out 00:11:49.562 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.011136 s, 377 MB/s 00:11:49.562 14:16:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:11:49.562 1+0 records in 00:11:49.562 1+0 records out 00:11:49.562 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0105571 s, 397 MB/s 00:11:49.562 14:16:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:11:49.562 14:16:17 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:49.562 14:16:17 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:49.563 14:16:17 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:11:49.563 ************************************ 00:11:49.563 START TEST dd_sparse_file_to_file 00:11:49.563 ************************************ 00:11:49.563 14:16:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1127 -- # file_to_file 00:11:49.563 14:16:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:11:49.563 14:16:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:11:49.563 14:16:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:11:49.563 14:16:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:11:49.563 14:16:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:11:49.563 14:16:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:11:49.563 14:16:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:11:49.563 14:16:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:11:49.563 14:16:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:11:49.563 14:16:17 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:11:49.563 { 00:11:49.563 "subsystems": [ 00:11:49.563 { 00:11:49.563 "subsystem": "bdev", 00:11:49.563 "config": [ 00:11:49.563 { 00:11:49.563 "params": { 00:11:49.563 "block_size": 4096, 00:11:49.563 "filename": "dd_sparse_aio_disk", 00:11:49.563 "name": "dd_aio" 00:11:49.563 }, 00:11:49.563 "method": "bdev_aio_create" 00:11:49.563 }, 00:11:49.563 { 00:11:49.563 "params": { 00:11:49.563 "lvs_name": "dd_lvstore", 00:11:49.563 "bdev_name": "dd_aio" 00:11:49.563 }, 00:11:49.563 "method": "bdev_lvol_create_lvstore" 00:11:49.563 }, 00:11:49.563 { 00:11:49.563 "method": "bdev_wait_for_examine" 00:11:49.563 } 00:11:49.563 ] 00:11:49.563 } 00:11:49.563 ] 00:11:49.563 } 00:11:49.563 [2024-11-06 14:16:17.192265] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:11:49.563 [2024-11-06 14:16:17.192407] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64366 ] 00:11:49.822 [2024-11-06 14:16:17.379338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.081 [2024-11-06 14:16:17.504324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.081 [2024-11-06 14:16:17.713326] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:50.340  [2024-11-06T14:16:19.350Z] Copying: 12/36 [MB] (average 444 MBps) 00:11:51.715 00:11:51.715 14:16:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:11:51.715 14:16:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:11:51.715 14:16:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:11:51.715 14:16:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:11:51.715 14:16:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:11:51.715 14:16:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:11:51.715 14:16:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:11:51.715 14:16:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:11:51.715 ************************************ 00:11:51.715 END TEST dd_sparse_file_to_file 00:11:51.715 ************************************ 00:11:51.715 14:16:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:11:51.715 14:16:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:11:51.715 00:11:51.715 real 0m2.138s 00:11:51.715 user 0m1.721s 00:11:51.715 sys 0m1.262s 00:11:51.715 14:16:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:51.715 14:16:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:11:51.715 14:16:19 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:11:51.715 14:16:19 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:51.715 14:16:19 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:51.715 14:16:19 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:11:51.715 ************************************ 00:11:51.715 START TEST dd_sparse_file_to_bdev 00:11:51.715 ************************************ 00:11:51.715 14:16:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1127 -- # file_to_bdev 00:11:51.715 14:16:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:11:51.715 14:16:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:11:51.715 14:16:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' 
['thin_provision']='true') 00:11:51.715 14:16:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:11:51.715 14:16:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:11:51.715 14:16:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:11:51.715 14:16:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:11:51.715 14:16:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:51.974 { 00:11:51.974 "subsystems": [ 00:11:51.974 { 00:11:51.974 "subsystem": "bdev", 00:11:51.974 "config": [ 00:11:51.974 { 00:11:51.974 "params": { 00:11:51.974 "block_size": 4096, 00:11:51.974 "filename": "dd_sparse_aio_disk", 00:11:51.974 "name": "dd_aio" 00:11:51.974 }, 00:11:51.974 "method": "bdev_aio_create" 00:11:51.974 }, 00:11:51.974 { 00:11:51.974 "params": { 00:11:51.974 "lvs_name": "dd_lvstore", 00:11:51.974 "lvol_name": "dd_lvol", 00:11:51.974 "size_in_mib": 36, 00:11:51.974 "thin_provision": true 00:11:51.974 }, 00:11:51.974 "method": "bdev_lvol_create" 00:11:51.974 }, 00:11:51.974 { 00:11:51.974 "method": "bdev_wait_for_examine" 00:11:51.974 } 00:11:51.974 ] 00:11:51.974 } 00:11:51.974 ] 00:11:51.974 } 00:11:51.974 [2024-11-06 14:16:19.408687] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:11:51.974 [2024-11-06 14:16:19.409060] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64426 ] 00:11:52.232 [2024-11-06 14:16:19.649691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:52.233 [2024-11-06 14:16:19.772549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.491 [2024-11-06 14:16:19.985899] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:52.750  [2024-11-06T14:16:21.784Z] Copying: 12/36 [MB] (average 461 MBps) 00:11:54.149 00:11:54.149 00:11:54.149 real 0m2.124s 00:11:54.149 user 0m1.757s 00:11:54.149 sys 0m1.212s 00:11:54.149 ************************************ 00:11:54.149 END TEST dd_sparse_file_to_bdev 00:11:54.149 ************************************ 00:11:54.150 14:16:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:54.150 14:16:21 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:54.150 14:16:21 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:11:54.150 14:16:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:54.150 14:16:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:54.150 14:16:21 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:11:54.150 ************************************ 00:11:54.150 START TEST dd_sparse_bdev_to_file 00:11:54.150 ************************************ 00:11:54.150 14:16:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1127 -- # bdev_to_file 00:11:54.150 14:16:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 
00:11:54.150 14:16:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:11:54.150 14:16:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:11:54.150 14:16:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:11:54.150 14:16:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:11:54.150 14:16:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:11:54.150 14:16:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:11:54.150 14:16:21 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:11:54.150 { 00:11:54.150 "subsystems": [ 00:11:54.150 { 00:11:54.150 "subsystem": "bdev", 00:11:54.150 "config": [ 00:11:54.150 { 00:11:54.150 "params": { 00:11:54.150 "block_size": 4096, 00:11:54.150 "filename": "dd_sparse_aio_disk", 00:11:54.150 "name": "dd_aio" 00:11:54.150 }, 00:11:54.150 "method": "bdev_aio_create" 00:11:54.150 }, 00:11:54.150 { 00:11:54.150 "method": "bdev_wait_for_examine" 00:11:54.150 } 00:11:54.150 ] 00:11:54.150 } 00:11:54.150 ] 00:11:54.150 } 00:11:54.150 [2024-11-06 14:16:21.602064] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:11:54.150 [2024-11-06 14:16:21.602210] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64476 ] 00:11:54.409 [2024-11-06 14:16:21.789950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:54.409 [2024-11-06 14:16:21.915163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.668 [2024-11-06 14:16:22.131079] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:54.927  [2024-11-06T14:16:23.939Z] Copying: 12/36 [MB] (average 1000 MBps) 00:11:56.304 00:11:56.304 14:16:23 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:11:56.304 14:16:23 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:11:56.304 14:16:23 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:11:56.304 14:16:23 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:11:56.304 14:16:23 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:11:56.304 14:16:23 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:11:56.304 14:16:23 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:11:56.304 14:16:23 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:11:56.304 ************************************ 00:11:56.304 END TEST dd_sparse_bdev_to_file 00:11:56.304 ************************************ 00:11:56.304 14:16:23 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:11:56.304 14:16:23 
spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:11:56.304 00:11:56.304 real 0m2.082s 00:11:56.304 user 0m1.704s 00:11:56.304 sys 0m1.207s 00:11:56.304 14:16:23 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:56.304 14:16:23 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:11:56.304 14:16:23 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:11:56.304 14:16:23 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:11:56.304 14:16:23 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:11:56.304 14:16:23 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:11:56.304 14:16:23 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:11:56.304 ************************************ 00:11:56.304 END TEST spdk_dd_sparse 00:11:56.304 ************************************ 00:11:56.304 00:11:56.304 real 0m6.906s 00:11:56.304 user 0m5.407s 00:11:56.304 sys 0m4.024s 00:11:56.304 14:16:23 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:56.304 14:16:23 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:11:56.304 14:16:23 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:11:56.304 14:16:23 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:56.304 14:16:23 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:56.304 14:16:23 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:11:56.304 ************************************ 00:11:56.304 START TEST spdk_dd_negative 00:11:56.304 ************************************ 00:11:56.304 14:16:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:11:56.304 * Looking for test storage... 
00:11:56.304 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:11:56.304 14:16:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:56.304 14:16:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1691 -- # lcov --version 00:11:56.304 14:16:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:56.564 14:16:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:56.564 14:16:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:56.564 14:16:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:56.564 14:16:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:56.564 14:16:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:11:56.564 14:16:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:11:56.564 14:16:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:11:56.564 14:16:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:11:56.564 14:16:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:11:56.564 14:16:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:11:56.564 14:16:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:11:56.564 14:16:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:56.564 14:16:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:11:56.564 14:16:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:11:56.564 14:16:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:56.564 14:16:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:56.564 14:16:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:11:56.564 14:16:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:11:56.564 14:16:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:56.564 14:16:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:11:56.564 14:16:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:11:56.564 14:16:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:11:56.564 14:16:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:11:56.564 14:16:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:56.564 14:16:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:11:56.564 14:16:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:11:56.564 14:16:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:56.564 14:16:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:56.564 14:16:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:11:56.564 14:16:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:56.564 14:16:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:56.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.564 --rc genhtml_branch_coverage=1 00:11:56.564 --rc genhtml_function_coverage=1 00:11:56.564 --rc genhtml_legend=1 00:11:56.564 --rc geninfo_all_blocks=1 00:11:56.564 --rc geninfo_unexecuted_blocks=1 00:11:56.564 00:11:56.564 ' 00:11:56.564 14:16:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:56.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.564 --rc genhtml_branch_coverage=1 00:11:56.564 --rc genhtml_function_coverage=1 00:11:56.564 --rc genhtml_legend=1 00:11:56.564 --rc geninfo_all_blocks=1 00:11:56.564 --rc geninfo_unexecuted_blocks=1 00:11:56.564 00:11:56.564 ' 00:11:56.564 14:16:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:56.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.564 --rc genhtml_branch_coverage=1 00:11:56.564 --rc genhtml_function_coverage=1 00:11:56.564 --rc genhtml_legend=1 00:11:56.564 --rc geninfo_all_blocks=1 00:11:56.564 --rc geninfo_unexecuted_blocks=1 00:11:56.564 00:11:56.564 ' 00:11:56.564 14:16:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:56.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.564 --rc genhtml_branch_coverage=1 00:11:56.564 --rc genhtml_function_coverage=1 00:11:56.564 --rc genhtml_legend=1 00:11:56.564 --rc geninfo_all_blocks=1 00:11:56.564 --rc geninfo_unexecuted_blocks=1 00:11:56.564 00:11:56.564 ' 00:11:56.564 14:16:23 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:56.564 14:16:23 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:11:56.564 14:16:24 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:56.564 14:16:24 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:56.564 14:16:24 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:11:56.564 14:16:24 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.564 14:16:24 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.564 14:16:24 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.564 14:16:24 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:11:56.564 14:16:24 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.564 14:16:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:56.564 14:16:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:56.564 14:16:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:56.564 14:16:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:56.564 14:16:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:11:56.564 14:16:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:56.564 14:16:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:56.564 14:16:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:56.564 ************************************ 00:11:56.564 START TEST 
dd_invalid_arguments 00:11:56.564 ************************************ 00:11:56.564 14:16:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1127 -- # invalid_arguments 00:11:56.564 14:16:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:11:56.564 14:16:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # local es=0 00:11:56.564 14:16:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:11:56.564 14:16:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:56.564 14:16:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:56.564 14:16:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:56.564 14:16:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:56.564 14:16:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:56.564 14:16:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:56.564 14:16:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:56.564 14:16:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:56.564 14:16:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:11:56.564 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:11:56.564 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:11:56.564 00:11:56.564 CPU options: 00:11:56.564 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:11:56.564 (like [0,1,10]) 00:11:56.564 --lcores lcore to CPU mapping list. The list is in the format: 00:11:56.564 [<,lcores[@CPUs]>...] 00:11:56.564 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:11:56.564 Within the group, '-' is used for range separator, 00:11:56.564 ',' is used for single number separator. 00:11:56.564 '( )' can be omitted for single element group, 00:11:56.564 '@' can be omitted if cpus and lcores have the same value 00:11:56.565 --disable-cpumask-locks Disable CPU core lock files. 00:11:56.565 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:11:56.565 pollers in the app support interrupt mode) 00:11:56.565 -p, --main-core main (primary) core for DPDK 00:11:56.565 00:11:56.565 Configuration options: 00:11:56.565 -c, --config, --json JSON config file 00:11:56.565 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:11:56.565 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:11:56.565 --wait-for-rpc wait for RPCs to initialize subsystems 00:11:56.565 --rpcs-allowed comma-separated list of permitted RPCS 00:11:56.565 --json-ignore-init-errors don't exit on invalid config entry 00:11:56.565 00:11:56.565 Memory options: 00:11:56.565 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:11:56.565 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:11:56.565 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:11:56.565 -R, --huge-unlink unlink huge files after initialization 00:11:56.565 -n, --mem-channels number of memory channels used for DPDK 00:11:56.565 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:11:56.565 --msg-mempool-size global message memory pool size in count (default: 262143) 00:11:56.565 --no-huge run without using hugepages 00:11:56.565 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:11:56.565 -i, --shm-id shared memory ID (optional) 00:11:56.565 -g, --single-file-segments force creating just one hugetlbfs file 00:11:56.565 00:11:56.565 PCI options: 00:11:56.565 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:11:56.565 -B, --pci-blocked pci addr to block (can be used more than once) 00:11:56.565 -u, --no-pci disable PCI access 00:11:56.565 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:11:56.565 00:11:56.565 Log options: 00:11:56.565 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:11:56.565 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:11:56.565 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:11:56.565 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:11:56.565 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, fuse_dispatcher, 00:11:56.565 gpt_parse, idxd, ioat, iscsi_init, json_util, keyring, log_rpc, lvol, 00:11:56.565 lvol_rpc, notify_rpc, nvme, nvme_auth, nvme_cuse, nvme_vfio, opal, 00:11:56.565 reactor, rpc, rpc_client, scsi, sock, sock_posix, spdk_aio_mgr_io, 00:11:56.565 thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 00:11:56.565 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, vfu, 00:11:56.565 vfu_virtio, vfu_virtio_blk, vfu_virtio_fs, vfu_virtio_fs_data, 00:11:56.565 vfu_virtio_io, vfu_virtio_scsi, vfu_virtio_scsi_data, virtio, 00:11:56.565 virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:11:56.565 --silence-noticelog disable notice level logging to stderr 00:11:56.565 00:11:56.565 Trace options: 00:11:56.565 --num-trace-entries number of trace entries for each core, must be power of 2, 00:11:56.565 [2024-11-06 14:16:24.145916] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:11:56.823 setting 0 to disable trace (default 32768) 00:11:56.823 Tracepoints vary in size and can use more than one trace entry. 00:11:56.823 -e, --tpoint-group [:] 00:11:56.823 group_name - tracepoint group name for spdk trace buffers (scsi, bdev, 00:11:56.823 ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, 00:11:56.823 blob, bdev_raid, scheduler, all). 00:11:56.823 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:11:56.823 a tracepoint group. First tpoint inside a group can be enabled by 00:11:56.823 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:11:56.823 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:11:56.823 in /include/spdk_internal/trace_defs.h 00:11:56.823 00:11:56.823 Other options: 00:11:56.823 -h, --help show this usage 00:11:56.823 -v, --version print SPDK version 00:11:56.823 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:11:56.823 --env-context Opaque context for use of the env implementation 00:11:56.823 00:11:56.823 Application specific: 00:11:56.823 [--------- DD Options ---------] 00:11:56.823 --if Input file. Must specify either --if or --ib. 00:11:56.823 --ib Input bdev. Must specifier either --if or --ib 00:11:56.823 --of Output file. Must specify either --of or --ob. 00:11:56.823 --ob Output bdev. Must specify either --of or --ob. 00:11:56.823 --iflag Input file flags. 00:11:56.823 --oflag Output file flags. 00:11:56.823 --bs I/O unit size (default: 4096) 00:11:56.823 --qd Queue depth (default: 2) 00:11:56.823 --count I/O unit count. The number of I/O units to copy. (default: all) 00:11:56.823 --skip Skip this many I/O units at start of input. (default: 0) 00:11:56.823 --seek Skip this many I/O units at start of output. (default: 0) 00:11:56.823 --aio Force usage of AIO. (by default io_uring is used if available) 00:11:56.823 --sparse Enable hole skipping in input target 00:11:56.823 Available iflag and oflag values: 00:11:56.823 append - append mode 00:11:56.823 direct - use direct I/O for data 00:11:56.823 directory - fail unless a directory 00:11:56.823 dsync - use synchronized I/O for data 00:11:56.823 noatime - do not update access time 00:11:56.823 noctty - do not assign controlling terminal from file 00:11:56.823 nofollow - do not follow symlinks 00:11:56.823 nonblock - use non-blocking I/O 00:11:56.824 sync - use synchronized I/O for data and metadata 00:11:56.824 ************************************ 00:11:56.824 END TEST dd_invalid_arguments 00:11:56.824 ************************************ 00:11:56.824 14:16:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # es=2 00:11:56.824 14:16:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:56.824 14:16:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:56.824 14:16:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:56.824 00:11:56.824 real 0m0.184s 00:11:56.824 user 0m0.085s 00:11:56.824 sys 0m0.095s 00:11:56.824 14:16:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:56.824 14:16:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:11:56.824 14:16:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:11:56.824 14:16:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:56.824 14:16:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:56.824 14:16:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:56.824 ************************************ 00:11:56.824 START TEST dd_double_input 00:11:56.824 ************************************ 00:11:56.824 14:16:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1127 -- # double_input 00:11:56.824 14:16:24 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:11:56.824 14:16:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # local es=0 00:11:56.824 14:16:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:11:56.824 14:16:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:56.824 14:16:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:56.824 14:16:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:56.824 14:16:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:56.824 14:16:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:56.824 14:16:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:56.824 14:16:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:56.824 14:16:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:56.824 14:16:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:11:56.824 [2024-11-06 14:16:24.405674] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
00:11:57.083 14:16:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # es=22 00:11:57.083 14:16:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:57.083 14:16:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:57.083 14:16:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:57.083 00:11:57.083 real 0m0.198s 00:11:57.083 user 0m0.089s 00:11:57.083 sys 0m0.105s 00:11:57.083 14:16:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:57.083 ************************************ 00:11:57.083 END TEST dd_double_input 00:11:57.083 ************************************ 00:11:57.083 14:16:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:11:57.083 14:16:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:11:57.083 14:16:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:57.083 14:16:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:57.083 14:16:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:57.083 ************************************ 00:11:57.083 START TEST dd_double_output 00:11:57.083 ************************************ 00:11:57.083 14:16:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1127 -- # double_output 00:11:57.083 14:16:24 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:11:57.083 14:16:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # local es=0 00:11:57.083 14:16:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:11:57.083 14:16:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:57.083 14:16:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:57.083 14:16:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:57.083 14:16:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:57.083 14:16:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:57.083 14:16:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:57.083 14:16:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:57.083 14:16:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:57.083 14:16:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:11:57.083 [2024-11-06 14:16:24.678078] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:11:57.342 14:16:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # es=22 00:11:57.342 14:16:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:57.342 14:16:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:57.342 14:16:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:57.342 00:11:57.342 real 0m0.197s 00:11:57.342 user 0m0.094s 00:11:57.342 sys 0m0.098s 00:11:57.342 14:16:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:57.342 14:16:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:11:57.342 ************************************ 00:11:57.342 END TEST dd_double_output 00:11:57.342 ************************************ 00:11:57.342 14:16:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:11:57.342 14:16:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:57.342 14:16:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:57.342 14:16:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:57.342 ************************************ 00:11:57.342 START TEST dd_no_input 00:11:57.342 ************************************ 00:11:57.342 14:16:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1127 -- # no_input 00:11:57.342 14:16:24 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:11:57.342 14:16:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # local es=0 00:11:57.342 14:16:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:11:57.342 14:16:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:57.342 14:16:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:57.342 14:16:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:57.342 14:16:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:57.342 14:16:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:57.342 14:16:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:57.342 14:16:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:57.342 14:16:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:57.342 14:16:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:11:57.342 [2024-11-06 14:16:24.955864] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:11:57.601 14:16:25 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # es=22 00:11:57.601 14:16:25 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:57.601 14:16:25 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:57.601 14:16:25 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:57.601 00:11:57.601 real 0m0.201s 00:11:57.601 user 0m0.089s 00:11:57.601 sys 0m0.108s 00:11:57.601 14:16:25 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:57.601 14:16:25 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:11:57.601 ************************************ 00:11:57.601 END TEST dd_no_input 00:11:57.601 ************************************ 00:11:57.601 14:16:25 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:11:57.601 14:16:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:57.601 14:16:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:57.601 14:16:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:57.601 ************************************ 00:11:57.601 START TEST dd_no_output 00:11:57.601 ************************************ 00:11:57.601 14:16:25 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1127 -- # no_output 00:11:57.601 14:16:25 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:57.601 14:16:25 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # local es=0 00:11:57.601 14:16:25 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:57.601 14:16:25 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:57.601 14:16:25 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:57.601 14:16:25 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:57.601 14:16:25 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:57.601 14:16:25 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:57.601 14:16:25 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:57.601 14:16:25 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:57.601 14:16:25 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:57.602 14:16:25 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:57.602 [2024-11-06 14:16:25.211095] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:11:57.860 14:16:25 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # es=22 00:11:57.860 14:16:25 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:57.860 14:16:25 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:57.860 14:16:25 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:57.860 00:11:57.860 real 0m0.185s 00:11:57.860 user 0m0.084s 00:11:57.860 sys 0m0.098s 00:11:57.860 14:16:25 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:57.860 14:16:25 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:11:57.860 ************************************ 00:11:57.860 END TEST dd_no_output 00:11:57.860 ************************************ 00:11:57.860 14:16:25 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:11:57.860 14:16:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:57.860 14:16:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:57.860 14:16:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:57.860 ************************************ 00:11:57.860 START TEST dd_wrong_blocksize 00:11:57.860 ************************************ 00:11:57.860 14:16:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1127 -- # wrong_blocksize 00:11:57.861 14:16:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:11:57.861 14:16:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:11:57.861 14:16:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:11:57.861 14:16:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:57.861 14:16:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:57.861 14:16:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:57.861 14:16:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:57.861 14:16:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:57.861 14:16:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:57.861 14:16:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:57.861 14:16:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:57.861 14:16:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:11:57.861 [2024-11-06 14:16:25.481720] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:11:58.119 14:16:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # es=22 00:11:58.119 14:16:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:58.119 ************************************ 00:11:58.119 END TEST dd_wrong_blocksize 00:11:58.119 ************************************ 00:11:58.119 14:16:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:58.119 14:16:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:58.119 00:11:58.119 real 0m0.199s 00:11:58.119 user 0m0.088s 00:11:58.119 sys 0m0.107s 00:11:58.119 14:16:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:58.119 14:16:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:11:58.119 14:16:25 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:11:58.119 14:16:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:58.119 14:16:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:58.119 14:16:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:11:58.119 ************************************ 00:11:58.119 START TEST dd_smaller_blocksize 00:11:58.119 ************************************ 00:11:58.119 14:16:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1127 -- # smaller_blocksize 00:11:58.119 14:16:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:11:58.119 14:16:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:11:58.119 14:16:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:11:58.119 14:16:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:58.119 14:16:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:58.120 14:16:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:58.120 14:16:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:58.120 14:16:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:58.120 14:16:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:58.120 14:16:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:58.120 
14:16:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:58.120 14:16:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:11:58.379 [2024-11-06 14:16:25.753214] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:11:58.379 [2024-11-06 14:16:25.753370] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64737 ] 00:11:58.379 [2024-11-06 14:16:25.941087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:58.639 [2024-11-06 14:16:26.076391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.898 [2024-11-06 14:16:26.299778] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:59.465 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:11:59.722 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:11:59.722 [2024-11-06 14:16:27.350804] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:11:59.722 [2024-11-06 14:16:27.350935] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:00.656 [2024-11-06 14:16:28.208404] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:12:00.914 14:16:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # es=244 00:12:00.914 14:16:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:00.914 14:16:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@662 -- # es=116 00:12:00.914 14:16:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # case "$es" in 00:12:00.914 14:16:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@670 -- # es=1 00:12:00.914 14:16:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:00.914 ************************************ 00:12:00.914 END TEST dd_smaller_blocksize 00:12:00.914 ************************************ 00:12:00.914 00:12:00.914 real 0m2.856s 00:12:00.914 user 0m1.779s 00:12:00.914 sys 0m0.958s 00:12:00.914 14:16:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:00.914 14:16:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:12:01.173 14:16:28 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:12:01.173 14:16:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:01.173 14:16:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:01.173 14:16:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:01.173 ************************************ 00:12:01.173 START TEST dd_invalid_count 00:12:01.173 ************************************ 00:12:01.173 14:16:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1127 -- # invalid_count 
00:12:01.173 14:16:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:12:01.173 14:16:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # local es=0 00:12:01.173 14:16:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:12:01.173 14:16:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:01.173 14:16:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:01.173 14:16:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:01.173 14:16:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:01.173 14:16:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:01.173 14:16:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:01.173 14:16:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:01.173 14:16:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:01.173 14:16:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:12:01.173 [2024-11-06 14:16:28.683268] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:12:01.173 ************************************ 00:12:01.173 END TEST dd_invalid_count 00:12:01.174 ************************************ 00:12:01.174 14:16:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # es=22 00:12:01.174 14:16:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:01.174 14:16:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:01.174 14:16:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:01.174 00:12:01.174 real 0m0.185s 00:12:01.174 user 0m0.091s 00:12:01.174 sys 0m0.092s 00:12:01.174 14:16:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:01.174 14:16:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:12:01.434 14:16:28 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:12:01.434 14:16:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:01.434 14:16:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:01.434 14:16:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:01.434 ************************************ 
00:12:01.434 START TEST dd_invalid_oflag 00:12:01.435 ************************************ 00:12:01.435 14:16:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1127 -- # invalid_oflag 00:12:01.435 14:16:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:12:01.435 14:16:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # local es=0 00:12:01.435 14:16:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:12:01.435 14:16:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:01.435 14:16:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:01.435 14:16:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:01.435 14:16:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:01.435 14:16:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:01.435 14:16:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:01.435 14:16:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:01.435 14:16:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:01.435 14:16:28 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:12:01.435 [2024-11-06 14:16:28.938755] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:12:01.435 14:16:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # es=22 00:12:01.435 14:16:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:01.435 14:16:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:01.435 14:16:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:01.435 00:12:01.435 real 0m0.190s 00:12:01.435 user 0m0.080s 00:12:01.435 sys 0m0.108s 00:12:01.435 14:16:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:01.435 ************************************ 00:12:01.435 END TEST dd_invalid_oflag 00:12:01.435 ************************************ 00:12:01.435 14:16:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:12:01.694 14:16:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:12:01.694 14:16:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:01.694 14:16:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:01.694 14:16:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:01.694 ************************************ 00:12:01.694 START TEST dd_invalid_iflag 00:12:01.694 
************************************ 00:12:01.694 14:16:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1127 -- # invalid_iflag 00:12:01.694 14:16:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:12:01.694 14:16:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # local es=0 00:12:01.694 14:16:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:12:01.694 14:16:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:01.694 14:16:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:01.694 14:16:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:01.695 14:16:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:01.695 14:16:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:01.695 14:16:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:01.695 14:16:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:01.695 14:16:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:01.695 14:16:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:12:01.695 [2024-11-06 14:16:29.204081] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:12:01.695 ************************************ 00:12:01.695 END TEST dd_invalid_iflag 00:12:01.695 ************************************ 00:12:01.695 14:16:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # es=22 00:12:01.695 14:16:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:01.695 14:16:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:01.695 14:16:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:01.695 00:12:01.695 real 0m0.179s 00:12:01.695 user 0m0.089s 00:12:01.695 sys 0m0.087s 00:12:01.695 14:16:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:01.695 14:16:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:12:01.695 14:16:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:12:01.695 14:16:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:01.695 14:16:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:01.695 14:16:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:01.953 ************************************ 00:12:01.953 START TEST dd_unknown_flag 00:12:01.953 ************************************ 00:12:01.953 
14:16:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1127 -- # unknown_flag 00:12:01.953 14:16:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:12:01.953 14:16:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # local es=0 00:12:01.953 14:16:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:12:01.953 14:16:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:01.953 14:16:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:01.953 14:16:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:01.953 14:16:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:01.953 14:16:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:01.953 14:16:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:01.953 14:16:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:01.953 14:16:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:01.953 14:16:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:12:01.953 [2024-11-06 14:16:29.452298] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:12:01.953 [2024-11-06 14:16:29.452648] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64861 ] 00:12:02.210 [2024-11-06 14:16:29.635617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:02.210 [2024-11-06 14:16:29.755371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.468 [2024-11-06 14:16:29.972748] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:02.468 [2024-11-06 14:16:30.093905] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:12:02.468 [2024-11-06 14:16:30.093996] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:02.468 [2024-11-06 14:16:30.094070] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:12:02.468 [2024-11-06 14:16:30.094090] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:02.468 [2024-11-06 14:16:30.094361] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:12:02.468 [2024-11-06 14:16:30.094389] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:02.468 [2024-11-06 14:16:30.094480] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:12:02.468 [2024-11-06 14:16:30.094505] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:12:03.405 [2024-11-06 14:16:30.934843] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:12:03.664 14:16:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # es=234 00:12:03.664 14:16:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:03.664 14:16:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@662 -- # es=106 00:12:03.664 14:16:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # case "$es" in 00:12:03.664 14:16:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@670 -- # es=1 00:12:03.664 14:16:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:03.664 00:12:03.664 real 0m1.904s 00:12:03.664 user 0m1.543s 00:12:03.664 sys 0m0.253s 00:12:03.664 ************************************ 00:12:03.664 END TEST dd_unknown_flag 00:12:03.664 ************************************ 00:12:03.664 14:16:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:03.664 14:16:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:12:03.923 14:16:31 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:12:03.923 14:16:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:03.923 14:16:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:03.923 14:16:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:03.923 ************************************ 00:12:03.923 START TEST dd_invalid_json 00:12:03.923 ************************************ 00:12:03.923 14:16:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1127 -- # invalid_json 00:12:03.923 14:16:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:12:03.924 14:16:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:12:03.924 14:16:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # local es=0 00:12:03.924 14:16:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:12:03.924 14:16:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:03.924 14:16:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:03.924 14:16:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:03.924 14:16:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:03.924 14:16:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:03.924 14:16:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:03.924 14:16:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:03.924 14:16:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:03.924 14:16:31 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:12:03.924 [2024-11-06 14:16:31.432582] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:12:03.924 [2024-11-06 14:16:31.432727] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64901 ] 00:12:04.183 [2024-11-06 14:16:31.618970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.183 [2024-11-06 14:16:31.753924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.183 [2024-11-06 14:16:31.754014] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:12:04.183 [2024-11-06 14:16:31.754035] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:12:04.183 [2024-11-06 14:16:31.754051] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:04.183 [2024-11-06 14:16:31.754110] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:12:04.442 14:16:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # es=234 00:12:04.442 14:16:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:04.442 14:16:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@662 -- # es=106 00:12:04.442 ************************************ 00:12:04.442 END TEST dd_invalid_json 00:12:04.442 ************************************ 00:12:04.442 14:16:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # case "$es" in 00:12:04.442 14:16:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@670 -- # es=1 00:12:04.442 14:16:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:04.442 00:12:04.442 real 0m0.724s 00:12:04.442 user 0m0.454s 00:12:04.443 sys 0m0.167s 00:12:04.443 14:16:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:04.443 14:16:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:12:04.702 14:16:32 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:12:04.702 14:16:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:04.702 14:16:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:04.702 14:16:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:04.702 ************************************ 00:12:04.702 START TEST dd_invalid_seek 00:12:04.702 ************************************ 00:12:04.702 14:16:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1127 -- # invalid_seek 00:12:04.702 14:16:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:12:04.702 14:16:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:12:04.702 14:16:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:12:04.702 14:16:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:12:04.702 14:16:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:12:04.702 
14:16:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:12:04.702 14:16:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:12:04.702 14:16:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:12:04.702 14:16:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@650 -- # local es=0 00:12:04.702 14:16:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:12:04.702 14:16:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:04.702 14:16:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:12:04.702 14:16:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:12:04.702 14:16:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:04.702 14:16:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:04.702 14:16:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:04.702 14:16:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:04.702 14:16:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:04.702 14:16:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:04.702 14:16:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:04.702 14:16:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:12:04.702 { 00:12:04.702 "subsystems": [ 00:12:04.702 { 00:12:04.702 "subsystem": "bdev", 00:12:04.702 "config": [ 00:12:04.702 { 00:12:04.702 "params": { 00:12:04.702 "block_size": 512, 00:12:04.702 "num_blocks": 512, 00:12:04.702 "name": "malloc0" 00:12:04.702 }, 00:12:04.702 "method": "bdev_malloc_create" 00:12:04.702 }, 00:12:04.702 { 00:12:04.702 "params": { 00:12:04.702 "block_size": 512, 00:12:04.702 "num_blocks": 512, 00:12:04.702 "name": "malloc1" 00:12:04.702 }, 00:12:04.702 "method": "bdev_malloc_create" 00:12:04.702 }, 00:12:04.702 { 00:12:04.702 "method": "bdev_wait_for_examine" 00:12:04.702 } 00:12:04.702 ] 00:12:04.702 } 00:12:04.702 ] 00:12:04.702 } 00:12:04.702 [2024-11-06 14:16:32.232892] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:12:04.702 [2024-11-06 14:16:32.233035] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64937 ] 00:12:04.961 [2024-11-06 14:16:32.417986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.961 [2024-11-06 14:16:32.567746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.220 [2024-11-06 14:16:32.803643] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:05.480 [2024-11-06 14:16:32.967239] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:12:05.480 [2024-11-06 14:16:32.967314] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:06.415 [2024-11-06 14:16:33.907862] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:12:06.679 14:16:34 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@653 -- # es=228 00:12:06.679 14:16:34 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:06.679 14:16:34 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@662 -- # es=100 00:12:06.679 14:16:34 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # case "$es" in 00:12:06.679 14:16:34 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@670 -- # es=1 00:12:06.679 14:16:34 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:06.679 00:12:06.679 real 0m2.087s 00:12:06.679 user 0m1.711s 00:12:06.679 sys 0m0.335s 00:12:06.679 14:16:34 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:06.679 14:16:34 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:12:06.679 ************************************ 00:12:06.679 END TEST dd_invalid_seek 00:12:06.679 ************************************ 00:12:06.679 14:16:34 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:12:06.679 14:16:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:06.679 14:16:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:06.679 14:16:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:06.679 ************************************ 00:12:06.679 START TEST dd_invalid_skip 00:12:06.679 ************************************ 00:12:06.679 14:16:34 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1127 -- # invalid_skip 00:12:06.679 14:16:34 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:12:06.679 14:16:34 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:12:06.679 14:16:34 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:12:06.679 14:16:34 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:12:06.679 14:16:34 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' 
['block_size']='512') 00:12:06.679 14:16:34 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:12:06.679 14:16:34 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:12:06.679 14:16:34 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:12:06.679 14:16:34 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:12:06.679 14:16:34 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@650 -- # local es=0 00:12:06.679 14:16:34 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:12:06.679 14:16:34 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:06.679 14:16:34 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:12:06.679 14:16:34 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:06.679 14:16:34 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:06.679 14:16:34 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:06.679 14:16:34 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:06.679 14:16:34 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:06.679 14:16:34 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:06.679 14:16:34 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:06.679 14:16:34 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:12:06.958 { 00:12:06.958 "subsystems": [ 00:12:06.958 { 00:12:06.958 "subsystem": "bdev", 00:12:06.958 "config": [ 00:12:06.958 { 00:12:06.958 "params": { 00:12:06.958 "block_size": 512, 00:12:06.958 "num_blocks": 512, 00:12:06.958 "name": "malloc0" 00:12:06.958 }, 00:12:06.958 "method": "bdev_malloc_create" 00:12:06.958 }, 00:12:06.958 { 00:12:06.958 "params": { 00:12:06.958 "block_size": 512, 00:12:06.958 "num_blocks": 512, 00:12:06.958 "name": "malloc1" 00:12:06.958 }, 00:12:06.958 "method": "bdev_malloc_create" 00:12:06.958 }, 00:12:06.958 { 00:12:06.958 "method": "bdev_wait_for_examine" 00:12:06.958 } 00:12:06.958 ] 00:12:06.958 } 00:12:06.958 ] 00:12:06.958 } 00:12:06.958 [2024-11-06 14:16:34.392240] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:12:06.958 [2024-11-06 14:16:34.392370] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64988 ] 00:12:06.958 [2024-11-06 14:16:34.576193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.217 [2024-11-06 14:16:34.721373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.477 [2024-11-06 14:16:34.964521] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:07.736 [2024-11-06 14:16:35.129000] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:12:07.736 [2024-11-06 14:16:35.129087] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:08.671 [2024-11-06 14:16:36.057568] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:12:08.930 14:16:36 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@653 -- # es=228 00:12:08.930 14:16:36 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:08.930 14:16:36 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@662 -- # es=100 00:12:08.930 14:16:36 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # case "$es" in 00:12:08.930 14:16:36 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@670 -- # es=1 00:12:08.930 14:16:36 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:08.930 00:12:08.931 real 0m2.073s 00:12:08.931 user 0m1.707s 00:12:08.931 sys 0m0.321s 00:12:08.931 14:16:36 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:08.931 ************************************ 00:12:08.931 END TEST dd_invalid_skip 00:12:08.931 ************************************ 00:12:08.931 14:16:36 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:12:08.931 14:16:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:12:08.931 14:16:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:08.931 14:16:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:08.931 14:16:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:08.931 ************************************ 00:12:08.931 START TEST dd_invalid_input_count 00:12:08.931 ************************************ 00:12:08.931 14:16:36 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1127 -- # invalid_input_count 00:12:08.931 14:16:36 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:12:08.931 14:16:36 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:12:08.931 14:16:36 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:12:08.931 14:16:36 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:12:08.931 14:16:36 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # 
method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:12:08.931 14:16:36 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:12:08.931 14:16:36 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:12:08.931 14:16:36 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:12:08.931 14:16:36 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@650 -- # local es=0 00:12:08.931 14:16:36 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:12:08.931 14:16:36 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:12:08.931 14:16:36 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:08.931 14:16:36 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:12:08.931 14:16:36 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:08.931 14:16:36 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:08.931 14:16:36 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:08.931 14:16:36 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:08.931 14:16:36 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:08.931 14:16:36 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:08.931 14:16:36 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:08.931 14:16:36 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:12:08.931 { 00:12:08.931 "subsystems": [ 00:12:08.931 { 00:12:08.931 "subsystem": "bdev", 00:12:08.931 "config": [ 00:12:08.931 { 00:12:08.931 "params": { 00:12:08.931 "block_size": 512, 00:12:08.931 "num_blocks": 512, 00:12:08.931 "name": "malloc0" 00:12:08.931 }, 00:12:08.931 "method": "bdev_malloc_create" 00:12:08.931 }, 00:12:08.931 { 00:12:08.931 "params": { 00:12:08.931 "block_size": 512, 00:12:08.931 "num_blocks": 512, 00:12:08.931 "name": "malloc1" 00:12:08.931 }, 00:12:08.931 "method": "bdev_malloc_create" 00:12:08.931 }, 00:12:08.931 { 00:12:08.931 "method": "bdev_wait_for_examine" 00:12:08.931 } 00:12:08.931 ] 00:12:08.931 } 00:12:08.931 ] 00:12:08.931 } 00:12:08.931 [2024-11-06 14:16:36.541988] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:12:08.931 [2024-11-06 14:16:36.542130] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65039 ] 00:12:09.191 [2024-11-06 14:16:36.728023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:09.452 [2024-11-06 14:16:36.882992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.711 [2024-11-06 14:16:37.135459] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:09.711 [2024-11-06 14:16:37.299771] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:12:09.711 [2024-11-06 14:16:37.299866] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:10.648 [2024-11-06 14:16:38.224218] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:12:10.907 14:16:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@653 -- # es=228 00:12:10.907 14:16:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:10.907 14:16:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@662 -- # es=100 00:12:10.907 14:16:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # case "$es" in 00:12:10.907 14:16:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@670 -- # es=1 00:12:10.907 14:16:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:10.907 ************************************ 00:12:10.907 END TEST dd_invalid_input_count 00:12:10.907 ************************************ 00:12:10.907 00:12:10.907 real 0m2.076s 00:12:10.907 user 0m1.687s 00:12:10.907 sys 0m0.342s 00:12:10.907 14:16:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:10.907 14:16:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:12:11.166 14:16:38 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:12:11.166 14:16:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:11.166 14:16:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:11.166 14:16:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:11.166 ************************************ 00:12:11.166 START TEST dd_invalid_output_count 00:12:11.166 ************************************ 00:12:11.166 14:16:38 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1127 -- # invalid_output_count 00:12:11.166 14:16:38 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:12:11.166 14:16:38 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:12:11.166 14:16:38 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:12:11.166 14:16:38 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:12:11.166 14:16:38 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:12:11.166 14:16:38 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@650 -- # local es=0 00:12:11.166 14:16:38 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:12:11.166 14:16:38 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:12:11.166 14:16:38 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:11.166 14:16:38 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:12:11.166 14:16:38 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:11.166 14:16:38 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:11.166 14:16:38 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:11.166 14:16:38 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:11.166 14:16:38 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:11.166 14:16:38 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:11.166 14:16:38 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:11.166 14:16:38 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:12:11.166 { 00:12:11.166 "subsystems": [ 00:12:11.166 { 00:12:11.166 "subsystem": "bdev", 00:12:11.166 "config": [ 00:12:11.166 { 00:12:11.166 "params": { 00:12:11.166 "block_size": 512, 00:12:11.166 "num_blocks": 512, 00:12:11.166 "name": "malloc0" 00:12:11.166 }, 00:12:11.166 "method": "bdev_malloc_create" 00:12:11.166 }, 00:12:11.166 { 00:12:11.166 "method": "bdev_wait_for_examine" 00:12:11.166 } 00:12:11.166 ] 00:12:11.166 } 00:12:11.166 ] 00:12:11.166 } 00:12:11.166 [2024-11-06 14:16:38.693671] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:12:11.166 [2024-11-06 14:16:38.694002] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65095 ] 00:12:11.424 [2024-11-06 14:16:38.881380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.424 [2024-11-06 14:16:39.004785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.682 [2024-11-06 14:16:39.219389] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:11.941 [2024-11-06 14:16:39.357181] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:12:11.941 [2024-11-06 14:16:39.357263] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:12.882 [2024-11-06 14:16:40.256527] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:12:13.140 14:16:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@653 -- # es=228 00:12:13.140 14:16:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:13.140 14:16:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@662 -- # es=100 00:12:13.140 14:16:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # case "$es" in 00:12:13.140 14:16:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@670 -- # es=1 00:12:13.140 14:16:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:13.140 00:12:13.140 real 0m1.972s 00:12:13.140 user 0m1.629s 00:12:13.140 sys 0m0.291s 00:12:13.140 14:16:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:13.140 14:16:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:12:13.140 ************************************ 00:12:13.140 END TEST dd_invalid_output_count 00:12:13.140 ************************************ 00:12:13.140 14:16:40 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:12:13.140 14:16:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:13.140 14:16:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:13.140 14:16:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:13.140 ************************************ 00:12:13.140 START TEST dd_bs_not_multiple 00:12:13.140 ************************************ 00:12:13.140 14:16:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1127 -- # bs_not_multiple 00:12:13.140 14:16:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:12:13.140 14:16:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:12:13.140 14:16:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:12:13.140 14:16:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:12:13.140 14:16:40 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:12:13.141 14:16:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:12:13.141 14:16:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:12:13.141 14:16:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:12:13.141 14:16:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@650 -- # local es=0 00:12:13.141 14:16:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:12:13.141 14:16:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:12:13.141 14:16:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:13.141 14:16:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:12:13.141 14:16:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:13.141 14:16:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:13.141 14:16:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:13.141 14:16:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:13.141 14:16:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:13.141 14:16:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:13.141 14:16:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:13.141 14:16:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:12:13.141 { 00:12:13.141 "subsystems": [ 00:12:13.141 { 00:12:13.141 "subsystem": "bdev", 00:12:13.141 "config": [ 00:12:13.141 { 00:12:13.141 "params": { 00:12:13.141 "block_size": 512, 00:12:13.141 "num_blocks": 512, 00:12:13.141 "name": "malloc0" 00:12:13.141 }, 00:12:13.141 "method": "bdev_malloc_create" 00:12:13.141 }, 00:12:13.141 { 00:12:13.141 "params": { 00:12:13.141 "block_size": 512, 00:12:13.141 "num_blocks": 512, 00:12:13.141 "name": "malloc1" 00:12:13.141 }, 00:12:13.141 "method": "bdev_malloc_create" 00:12:13.141 }, 00:12:13.141 { 00:12:13.141 "method": "bdev_wait_for_examine" 00:12:13.141 } 00:12:13.141 ] 00:12:13.141 } 00:12:13.141 ] 00:12:13.141 } 00:12:13.141 [2024-11-06 14:16:40.744171] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:12:13.141 [2024-11-06 14:16:40.744317] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65140 ] 00:12:13.399 [2024-11-06 14:16:40.935009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:13.657 [2024-11-06 14:16:41.064940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.657 [2024-11-06 14:16:41.285695] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:13.915 [2024-11-06 14:16:41.439843] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:12:13.915 [2024-11-06 14:16:41.439961] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:14.850 [2024-11-06 14:16:42.370300] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:12:15.109 14:16:42 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@653 -- # es=234 00:12:15.109 14:16:42 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:15.109 14:16:42 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@662 -- # es=106 00:12:15.109 ************************************ 00:12:15.109 END TEST dd_bs_not_multiple 00:12:15.109 ************************************ 00:12:15.109 14:16:42 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # case "$es" in 00:12:15.109 14:16:42 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@670 -- # es=1 00:12:15.109 14:16:42 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:15.109 00:12:15.109 real 0m2.027s 00:12:15.109 user 0m1.711s 00:12:15.109 sys 0m0.271s 00:12:15.109 14:16:42 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:15.109 14:16:42 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:12:15.109 00:12:15.109 real 0m18.958s 00:12:15.109 user 0m13.518s 00:12:15.109 sys 0m4.834s 00:12:15.109 14:16:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:15.109 14:16:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:15.109 ************************************ 00:12:15.109 END TEST spdk_dd_negative 00:12:15.109 ************************************ 00:12:15.377 ************************************ 00:12:15.377 END TEST spdk_dd 00:12:15.377 ************************************ 00:12:15.377 00:12:15.377 real 3m34.216s 00:12:15.377 user 2m51.733s 00:12:15.377 sys 1m22.198s 00:12:15.377 14:16:42 spdk_dd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:15.377 14:16:42 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:12:15.377 14:16:42 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:12:15.377 14:16:42 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:12:15.377 14:16:42 -- spdk/autotest.sh@256 -- # timing_exit lib 00:12:15.377 14:16:42 -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:15.377 14:16:42 -- common/autotest_common.sh@10 -- # set +x 00:12:15.377 14:16:42 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:12:15.377 14:16:42 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:12:15.377 14:16:42 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:12:15.377 14:16:42 -- spdk/autotest.sh@273 
-- # export NET_TYPE 00:12:15.377 14:16:42 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:12:15.377 14:16:42 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:12:15.377 14:16:42 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:12:15.377 14:16:42 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:15.377 14:16:42 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:15.377 14:16:42 -- common/autotest_common.sh@10 -- # set +x 00:12:15.377 ************************************ 00:12:15.377 START TEST nvmf_tcp 00:12:15.377 ************************************ 00:12:15.377 14:16:42 nvmf_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:12:15.649 * Looking for test storage... 00:12:15.649 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:12:15.649 14:16:43 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:15.649 14:16:43 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:12:15.649 14:16:43 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:15.649 14:16:43 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:15.649 14:16:43 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:15.649 14:16:43 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:15.649 14:16:43 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:15.649 14:16:43 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:12:15.649 14:16:43 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:12:15.649 14:16:43 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:12:15.649 14:16:43 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:12:15.649 14:16:43 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:12:15.649 14:16:43 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:12:15.649 14:16:43 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:12:15.649 14:16:43 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:15.649 14:16:43 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:12:15.649 14:16:43 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:12:15.649 14:16:43 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:15.649 14:16:43 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:15.649 14:16:43 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:12:15.649 14:16:43 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:12:15.649 14:16:43 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:15.649 14:16:43 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:12:15.649 14:16:43 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:12:15.649 14:16:43 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:12:15.649 14:16:43 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:12:15.649 14:16:43 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:15.649 14:16:43 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:12:15.649 14:16:43 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:12:15.649 14:16:43 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:15.649 14:16:43 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:15.649 14:16:43 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:12:15.649 14:16:43 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:15.649 14:16:43 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:15.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.649 --rc genhtml_branch_coverage=1 00:12:15.649 --rc genhtml_function_coverage=1 00:12:15.649 --rc genhtml_legend=1 00:12:15.649 --rc geninfo_all_blocks=1 00:12:15.649 --rc geninfo_unexecuted_blocks=1 00:12:15.649 00:12:15.649 ' 00:12:15.649 14:16:43 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:15.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.649 --rc genhtml_branch_coverage=1 00:12:15.649 --rc genhtml_function_coverage=1 00:12:15.649 --rc genhtml_legend=1 00:12:15.649 --rc geninfo_all_blocks=1 00:12:15.649 --rc geninfo_unexecuted_blocks=1 00:12:15.649 00:12:15.649 ' 00:12:15.649 14:16:43 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:15.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.649 --rc genhtml_branch_coverage=1 00:12:15.649 --rc genhtml_function_coverage=1 00:12:15.649 --rc genhtml_legend=1 00:12:15.649 --rc geninfo_all_blocks=1 00:12:15.649 --rc geninfo_unexecuted_blocks=1 00:12:15.649 00:12:15.649 ' 00:12:15.649 14:16:43 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:15.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.649 --rc genhtml_branch_coverage=1 00:12:15.649 --rc genhtml_function_coverage=1 00:12:15.649 --rc genhtml_legend=1 00:12:15.649 --rc geninfo_all_blocks=1 00:12:15.649 --rc geninfo_unexecuted_blocks=1 00:12:15.649 00:12:15.649 ' 00:12:15.649 14:16:43 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:12:15.649 14:16:43 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:12:15.649 14:16:43 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:12:15.649 14:16:43 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:15.649 14:16:43 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:15.649 14:16:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:15.649 ************************************ 00:12:15.649 START TEST nvmf_target_core 00:12:15.649 ************************************ 00:12:15.649 14:16:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:12:15.909 * Looking for test storage... 00:12:15.909 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:12:15.909 14:16:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:15.909 14:16:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:12:15.909 14:16:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:15.909 14:16:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:15.909 14:16:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:15.909 14:16:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:15.909 14:16:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:15.909 14:16:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:12:15.909 14:16:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:12:15.909 14:16:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:12:15.909 14:16:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:12:15.909 14:16:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:12:15.909 14:16:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:12:15.909 14:16:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:12:15.909 14:16:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:15.909 14:16:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:12:15.909 14:16:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:12:15.909 14:16:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:15.909 14:16:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:15.909 14:16:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:12:15.909 14:16:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:12:15.909 14:16:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:15.909 14:16:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:12:15.909 14:16:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:12:15.909 14:16:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:12:15.909 14:16:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:12:15.909 14:16:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:15.909 14:16:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:12:15.909 14:16:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:12:15.909 14:16:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:15.909 14:16:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:15.909 14:16:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:12:15.909 14:16:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:15.909 14:16:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:15.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.909 --rc genhtml_branch_coverage=1 00:12:15.909 --rc genhtml_function_coverage=1 00:12:15.909 --rc genhtml_legend=1 00:12:15.909 --rc geninfo_all_blocks=1 00:12:15.909 --rc geninfo_unexecuted_blocks=1 00:12:15.909 00:12:15.909 ' 00:12:15.909 14:16:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:15.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.909 --rc genhtml_branch_coverage=1 00:12:15.909 --rc genhtml_function_coverage=1 00:12:15.909 --rc genhtml_legend=1 00:12:15.909 --rc geninfo_all_blocks=1 00:12:15.909 --rc geninfo_unexecuted_blocks=1 00:12:15.909 00:12:15.909 ' 00:12:15.909 14:16:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:15.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.909 --rc genhtml_branch_coverage=1 00:12:15.909 --rc genhtml_function_coverage=1 00:12:15.909 --rc genhtml_legend=1 00:12:15.909 --rc geninfo_all_blocks=1 00:12:15.909 --rc geninfo_unexecuted_blocks=1 00:12:15.909 00:12:15.909 ' 00:12:15.909 14:16:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:15.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.909 --rc genhtml_branch_coverage=1 00:12:15.909 --rc genhtml_function_coverage=1 00:12:15.909 --rc genhtml_legend=1 00:12:15.909 --rc geninfo_all_blocks=1 00:12:15.909 --rc geninfo_unexecuted_blocks=1 00:12:15.909 00:12:15.909 ' 00:12:15.909 14:16:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:12:15.909 14:16:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:12:15.909 14:16:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:15.909 14:16:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:12:15.910 14:16:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:15.910 14:16:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:15.910 14:16:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:15.910 14:16:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:15.910 14:16:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:15.910 14:16:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:15.910 14:16:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:15.910 14:16:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:15.910 14:16:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:15.910 14:16:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:15.910 14:16:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:12:15.910 14:16:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:12:15.910 14:16:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:15.910 14:16:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:15.910 14:16:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:15.910 14:16:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:15.910 14:16:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:15.910 14:16:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:12:15.910 14:16:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:15.910 14:16:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:15.910 14:16:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:15.910 14:16:43 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.910 14:16:43 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:12:15.910 14:16:43 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.910 14:16:43 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:12:15.910 14:16:43 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.910 14:16:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:12:15.910 14:16:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:15.910 14:16:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:15.910 14:16:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:15.910 14:16:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:15.910 14:16:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:15.910 14:16:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:15.910 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:15.910 14:16:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:15.910 14:16:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:15.910 14:16:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:15.910 14:16:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:12:15.910 14:16:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:12:15.910 14:16:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:12:15.910 14:16:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:15.910 14:16:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:15.910 14:16:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:15.910 14:16:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:15.910 ************************************ 00:12:15.910 START TEST nvmf_host_management 00:12:15.910 ************************************ 00:12:15.910 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:16.170 * Looking for test storage... 
00:12:16.170 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:16.170 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:16.170 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:12:16.170 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:16.170 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:16.170 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:16.170 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:16.170 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:16.170 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:12:16.170 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:12:16.170 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:12:16.170 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:12:16.170 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:12:16.170 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:12:16.170 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:12:16.170 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:16.170 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:12:16.170 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:12:16.170 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:16.170 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:16.170 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:12:16.170 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:12:16.170 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:16.170 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:12:16.170 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:12:16.170 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:12:16.170 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:12:16.170 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:16.170 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:12:16.170 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:12:16.170 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:16.170 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:16.170 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:12:16.170 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:16.170 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:16.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.170 --rc genhtml_branch_coverage=1 00:12:16.170 --rc genhtml_function_coverage=1 00:12:16.170 --rc genhtml_legend=1 00:12:16.170 --rc geninfo_all_blocks=1 00:12:16.170 --rc geninfo_unexecuted_blocks=1 00:12:16.170 00:12:16.170 ' 00:12:16.170 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:16.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.170 --rc genhtml_branch_coverage=1 00:12:16.170 --rc genhtml_function_coverage=1 00:12:16.170 --rc genhtml_legend=1 00:12:16.170 --rc geninfo_all_blocks=1 00:12:16.170 --rc geninfo_unexecuted_blocks=1 00:12:16.170 00:12:16.170 ' 00:12:16.170 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:16.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.170 --rc genhtml_branch_coverage=1 00:12:16.170 --rc genhtml_function_coverage=1 00:12:16.170 --rc genhtml_legend=1 00:12:16.170 --rc geninfo_all_blocks=1 00:12:16.170 --rc geninfo_unexecuted_blocks=1 00:12:16.170 00:12:16.170 ' 00:12:16.170 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:16.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.170 --rc genhtml_branch_coverage=1 00:12:16.170 --rc genhtml_function_coverage=1 00:12:16.170 --rc genhtml_legend=1 00:12:16.170 --rc geninfo_all_blocks=1 00:12:16.170 --rc geninfo_unexecuted_blocks=1 00:12:16.170 00:12:16.170 ' 00:12:16.170 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
00:12:16.170 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:12:16.170 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:16.170 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:16.170 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:16.170 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:16.170 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:16.170 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:16.170 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:16.170 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:16.171 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:16.171 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:16.171 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:12:16.171 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:12:16.171 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:16.171 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:16.171 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:16.171 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:16.171 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:16.171 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:12:16.171 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:16.171 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:16.171 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:16.171 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.171 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.171 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.171 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:12:16.171 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.171 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:12:16.171 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:16.171 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:16.171 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:16.171 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:16.171 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:16.171 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:16.171 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:16.171 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:16.171 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:16.171 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:16.171 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:16.171 14:16:43 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:16.171 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:12:16.171 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:16.171 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:16.171 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:16.171 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:16.171 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:16.171 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:16.171 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:16.171 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:16.431 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:16.431 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:16.431 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:16.431 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:16.431 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:16.431 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:16.431 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:16.431 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:16.431 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:16.431 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:16.431 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:16.431 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:16.431 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:16.431 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:16.431 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:16.431 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:16.431 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:16.431 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:16.431 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:16.431 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:16.431 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:16.431 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:16.431 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:16.431 Cannot find device "nvmf_init_br" 00:12:16.431 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:12:16.431 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:16.431 Cannot find device "nvmf_init_br2" 00:12:16.431 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:12:16.431 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:16.431 Cannot find device "nvmf_tgt_br" 00:12:16.431 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:12:16.431 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:16.431 Cannot find device "nvmf_tgt_br2" 00:12:16.431 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:12:16.431 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:16.431 Cannot find device "nvmf_init_br" 00:12:16.431 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:12:16.431 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:16.431 Cannot find device "nvmf_init_br2" 00:12:16.431 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:12:16.431 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:16.431 Cannot find device "nvmf_tgt_br" 00:12:16.431 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:12:16.431 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:16.431 Cannot find device "nvmf_tgt_br2" 00:12:16.431 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:12:16.431 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:16.431 Cannot find device "nvmf_br" 00:12:16.431 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:12:16.431 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:16.431 Cannot find device "nvmf_init_if" 00:12:16.431 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:12:16.431 14:16:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:16.431 Cannot find device "nvmf_init_if2" 00:12:16.431 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:12:16.431 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:16.431 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:16.431 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:12:16.431 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:16.431 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:16.431 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:12:16.431 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:16.431 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:16.690 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:16.690 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:16.690 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:16.690 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:16.690 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:16.690 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:16.691 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:16.691 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:16.691 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:16.691 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:16.691 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:16.691 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:16.691 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:16.691 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:16.691 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:16.691 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:16.691 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:16.691 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:16.691 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:12:16.691 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:16.691 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:16.691 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:16.691 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:16.691 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:16.691 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:16.691 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:16.950 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:16.950 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:16.950 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:16.950 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:16.950 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:16.950 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:16.950 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.230 ms 00:12:16.950 00:12:16.950 --- 10.0.0.3 ping statistics --- 00:12:16.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.950 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:12:16.950 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:16.950 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:16.950 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.113 ms 00:12:16.950 00:12:16.950 --- 10.0.0.4 ping statistics --- 00:12:16.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.950 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:12:16.950 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:16.950 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:16.950 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:12:16.950 00:12:16.950 --- 10.0.0.1 ping statistics --- 00:12:16.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.950 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:12:16.950 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:16.950 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:16.950 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:12:16.950 00:12:16.950 --- 10.0.0.2 ping statistics --- 00:12:16.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.950 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:12:16.950 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:16.950 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:12:16.950 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:16.950 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:16.950 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:16.950 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:16.950 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:16.950 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:16.950 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:16.950 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:12:16.950 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:12:16.950 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:12:16.950 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:16.950 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:16.950 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:16.950 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=65500 00:12:16.950 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:12:16.950 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 65500 00:12:16.950 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 65500 ']' 00:12:16.950 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:16.950 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:16.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:16.950 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:16.950 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:16.950 14:16:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:16.950 [2024-11-06 14:16:44.578656] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:12:16.950 [2024-11-06 14:16:44.578834] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:17.208 [2024-11-06 14:16:44.772782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:17.468 [2024-11-06 14:16:44.912916] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:17.468 [2024-11-06 14:16:44.913000] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:17.468 [2024-11-06 14:16:44.913018] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:17.468 [2024-11-06 14:16:44.913031] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:17.468 [2024-11-06 14:16:44.913045] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:17.468 [2024-11-06 14:16:44.915348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:17.468 [2024-11-06 14:16:44.915582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:17.468 [2024-11-06 14:16:44.915594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:17.468 [2024-11-06 14:16:44.915478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:17.726 [2024-11-06 14:16:45.157991] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:17.984 14:16:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:17.984 14:16:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:12:17.984 14:16:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:17.984 14:16:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:17.984 14:16:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:17.984 14:16:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:17.984 14:16:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:17.984 14:16:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.984 14:16:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:17.984 [2024-11-06 14:16:45.487450] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:17.984 14:16:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.984 14:16:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:12:17.984 14:16:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:17.984 14:16:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:17.984 14:16:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
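The four "Reactor started" notices above follow directly from the core mask: 0x1E is binary 11110, so bits 1 through 4 are set and bit 0 is clear, which leaves core 0 free for the bdevperf initiator started later with -c 0x1. A quick illustration (not part of the test scripts):

# Walk the low bits of 0x1E and print which cores are selected.
for bit in {0..7}; do
    (( (0x1E >> bit) & 1 )) && printf 'reactor on core %d\n' "$bit"
done
# prints cores 1 2 3 4, matching the "Reactor started on core N" notices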
00:12:17.984 14:16:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:12:17.984 14:16:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:12:17.984 14:16:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.985 14:16:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:18.242 Malloc0 00:12:18.242 [2024-11-06 14:16:45.639150] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:18.242 14:16:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.242 14:16:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:12:18.242 14:16:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:18.242 14:16:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:18.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:18.242 14:16:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=65554 00:12:18.242 14:16:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 65554 /var/tmp/bdevperf.sock 00:12:18.242 14:16:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 65554 ']' 00:12:18.242 14:16:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:12:18.242 14:16:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:12:18.242 14:16:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:18.242 14:16:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:12:18.242 14:16:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:18.242 14:16:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:12:18.242 14:16:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
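The cat at host_management.sh@23 assembles a small batch of RPCs and feeds it to rpc_cmd (@30); the log only shows the results (the Malloc0 bdev and the NVMe/TCP listener on 10.0.0.3:4420). The batch itself is never printed, so the following is a plausible reconstruction rather than the actual file: the RPC names are standard SPDK RPCs, the subsystem/host NQNs and listener address are taken from the log, and the malloc size, block size and serial (NVMF_SERIAL from nvmf/common.sh) are assumptions.

# Hypothetical rpcs.txt contents for this test (reconstruction, see note above).
bdev_malloc_create 64 512 -b Malloc0
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0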
00:12:18.242 14:16:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:18.242 14:16:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:18.242 14:16:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:18.242 { 00:12:18.242 "params": { 00:12:18.242 "name": "Nvme$subsystem", 00:12:18.242 "trtype": "$TEST_TRANSPORT", 00:12:18.242 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:18.242 "adrfam": "ipv4", 00:12:18.242 "trsvcid": "$NVMF_PORT", 00:12:18.242 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:18.242 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:18.242 "hdgst": ${hdgst:-false}, 00:12:18.242 "ddgst": ${ddgst:-false} 00:12:18.242 }, 00:12:18.242 "method": "bdev_nvme_attach_controller" 00:12:18.242 } 00:12:18.242 EOF 00:12:18.242 )") 00:12:18.242 14:16:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:18.242 14:16:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:12:18.242 14:16:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:12:18.242 14:16:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:12:18.242 14:16:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:18.242 "params": { 00:12:18.242 "name": "Nvme0", 00:12:18.242 "trtype": "tcp", 00:12:18.242 "traddr": "10.0.0.3", 00:12:18.242 "adrfam": "ipv4", 00:12:18.243 "trsvcid": "4420", 00:12:18.243 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:18.243 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:18.243 "hdgst": false, 00:12:18.243 "ddgst": false 00:12:18.243 }, 00:12:18.243 "method": "bdev_nvme_attach_controller" 00:12:18.243 }' 00:12:18.243 [2024-11-06 14:16:45.823959] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:12:18.243 [2024-11-06 14:16:45.824122] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65554 ] 00:12:18.500 [2024-11-06 14:16:46.016367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.758 [2024-11-06 14:16:46.165049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.016 [2024-11-06 14:16:46.414210] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:19.016 Running I/O for 10 seconds... 
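On the initiator side, bdevperf gets its NVMe controller from the JSON printed just above; the --json /dev/fd/63 argument in the traced command line is bash process substitution, so the output of gen_nvmf_target_json never touches disk. Spelled out, the launch is equivalent to:

# Same invocation as in the trace, with the process substitution made explicit.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 0) \
    -q 64 -o 65536 -w verify -t 10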
00:12:19.275 14:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:19.275 14:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:12:19.275 14:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:12:19.275 14:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.275 14:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:19.275 14:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.275 14:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:19.275 14:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:12:19.275 14:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:12:19.275 14:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:12:19.275 14:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:12:19.275 14:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:12:19.275 14:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:12:19.275 14:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:12:19.275 14:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:12:19.275 14:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.275 14:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:19.275 14:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:12:19.275 14:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.275 14:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:12:19.275 14:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:12:19.275 14:16:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:12:19.534 14:16:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:12:19.534 14:16:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:12:19.534 14:16:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:12:19.534 14:16:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:12:19.534 14:16:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.534 14:16:47 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:19.534 14:16:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.534 14:16:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=643 00:12:19.534 14:16:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 643 -ge 100 ']' 00:12:19.534 14:16:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:12:19.534 14:16:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:12:19.534 14:16:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:12:19.534 14:16:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:19.534 14:16:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.534 14:16:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:19.534 [2024-11-06 14:16:47.070887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.534 [2024-11-06 14:16:47.070948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.535 [2024-11-06 14:16:47.070990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:90240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.535 [2024-11-06 14:16:47.071008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.535 [2024-11-06 14:16:47.071029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:90368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.535 [2024-11-06 14:16:47.071046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.535 [2024-11-06 14:16:47.071073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:90496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.535 [2024-11-06 14:16:47.071096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.535 [2024-11-06 14:16:47.071124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:90624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.535 [2024-11-06 14:16:47.071147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.535 [2024-11-06 14:16:47.071174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.535 [2024-11-06 14:16:47.071197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.535 [2024-11-06 14:16:47.071224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:90880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.535 [2024-11-06 
14:16:47.071245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.535 [2024-11-06 14:16:47.071271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:91008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.535 [2024-11-06 14:16:47.071294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.535 [2024-11-06 14:16:47.071321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:91136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.535 [2024-11-06 14:16:47.071343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.535 [2024-11-06 14:16:47.071370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:91264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.535 [2024-11-06 14:16:47.071394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.535 [2024-11-06 14:16:47.071422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:91392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.535 [2024-11-06 14:16:47.071446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.535 [2024-11-06 14:16:47.071471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:91520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.535 [2024-11-06 14:16:47.071493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.535 [2024-11-06 14:16:47.071519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:91648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.535 [2024-11-06 14:16:47.071552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.535 [2024-11-06 14:16:47.071590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:91776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.535 [2024-11-06 14:16:47.071611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.535 [2024-11-06 14:16:47.071635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:91904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.535 [2024-11-06 14:16:47.071654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.535 [2024-11-06 14:16:47.071679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.535 [2024-11-06 14:16:47.071699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.535 [2024-11-06 14:16:47.071723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:92160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.535 [2024-11-06 14:16:47.071744] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.535 [2024-11-06 14:16:47.071779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:92288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.535 [2024-11-06 14:16:47.071800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.535 [2024-11-06 14:16:47.071824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:92416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.535 [2024-11-06 14:16:47.071844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.535 [2024-11-06 14:16:47.071888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:92544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.535 [2024-11-06 14:16:47.071910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.535 [2024-11-06 14:16:47.071934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:92672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.535 [2024-11-06 14:16:47.071955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.535 [2024-11-06 14:16:47.071978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:92800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.535 [2024-11-06 14:16:47.071998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.535 [2024-11-06 14:16:47.072021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:92928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.535 [2024-11-06 14:16:47.072041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.535 [2024-11-06 14:16:47.072067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:93056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.535 [2024-11-06 14:16:47.072090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.535 [2024-11-06 14:16:47.072117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:93184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.535 [2024-11-06 14:16:47.072140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.535 [2024-11-06 14:16:47.072166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:93312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.535 [2024-11-06 14:16:47.072190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.535 [2024-11-06 14:16:47.072216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:93440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.535 [2024-11-06 14:16:47.072238] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.535 [2024-11-06 14:16:47.072261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:93568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.535 [2024-11-06 14:16:47.072282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.535 [2024-11-06 14:16:47.072304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:93696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.535 [2024-11-06 14:16:47.072325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.535 [2024-11-06 14:16:47.072346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:93824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.535 [2024-11-06 14:16:47.072367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.535 [2024-11-06 14:16:47.072394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:93952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.535 [2024-11-06 14:16:47.072415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.535 [2024-11-06 14:16:47.072438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:94080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.535 [2024-11-06 14:16:47.072460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.535 [2024-11-06 14:16:47.072484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:94208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.535 [2024-11-06 14:16:47.072504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.535 [2024-11-06 14:16:47.072536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:94336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.535 [2024-11-06 14:16:47.072558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.535 [2024-11-06 14:16:47.072584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:94464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.535 [2024-11-06 14:16:47.072606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.535 [2024-11-06 14:16:47.072632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:94592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.535 [2024-11-06 14:16:47.072654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.535 [2024-11-06 14:16:47.072681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:94720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.535 [2024-11-06 14:16:47.072703] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.535 [2024-11-06 14:16:47.072775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:94848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.535 [2024-11-06 14:16:47.072800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.535 [2024-11-06 14:16:47.072826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:94976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.535 [2024-11-06 14:16:47.072861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.535 [2024-11-06 14:16:47.072882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:95104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.536 [2024-11-06 14:16:47.072898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.536 [2024-11-06 14:16:47.072917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.536 [2024-11-06 14:16:47.072932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.536 [2024-11-06 14:16:47.072951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:95360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.536 [2024-11-06 14:16:47.072966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.536 [2024-11-06 14:16:47.072985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:95488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.536 [2024-11-06 14:16:47.073000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.536 [2024-11-06 14:16:47.073018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:95616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.536 [2024-11-06 14:16:47.073033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.536 [2024-11-06 14:16:47.073051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:95744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.536 [2024-11-06 14:16:47.073066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.536 [2024-11-06 14:16:47.073083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:95872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.536 [2024-11-06 14:16:47.073099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.536 [2024-11-06 14:16:47.073116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.536 [2024-11-06 14:16:47.073131] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.536 [2024-11-06 14:16:47.073148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.536 [2024-11-06 14:16:47.073164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.536 [2024-11-06 14:16:47.073181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:96256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.536 [2024-11-06 14:16:47.073196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.536 [2024-11-06 14:16:47.073217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.536 [2024-11-06 14:16:47.073232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.536 [2024-11-06 14:16:47.073253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.536 [2024-11-06 14:16:47.073274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.536 [2024-11-06 14:16:47.073289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.536 [2024-11-06 14:16:47.073301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.536 [2024-11-06 14:16:47.073316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.536 [2024-11-06 14:16:47.073328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.536 [2024-11-06 14:16:47.073341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.536 [2024-11-06 14:16:47.073353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.536 [2024-11-06 14:16:47.073367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.536 [2024-11-06 14:16:47.073378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.536 [2024-11-06 14:16:47.073392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:97152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.536 [2024-11-06 14:16:47.073403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.536 [2024-11-06 14:16:47.073417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.536 [2024-11-06 14:16:47.073429] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.536 [2024-11-06 14:16:47.073443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.536 [2024-11-06 14:16:47.073454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.536 [2024-11-06 14:16:47.073468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.536 [2024-11-06 14:16:47.073480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.536 [2024-11-06 14:16:47.073494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:97664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.536 [2024-11-06 14:16:47.073506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.536 [2024-11-06 14:16:47.073520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.536 [2024-11-06 14:16:47.073532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.536 [2024-11-06 14:16:47.073546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.536 [2024-11-06 14:16:47.073558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.536 [2024-11-06 14:16:47.073572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.536 [2024-11-06 14:16:47.073585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.536 [2024-11-06 14:16:47.073600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:19.536 [2024-11-06 14:16:47.073612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.536 [2024-11-06 14:16:47.073625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b780 is same with the state(6) to be set 00:12:19.536 [2024-11-06 14:16:47.074102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:12:19.536 [2024-11-06 14:16:47.074128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.536 [2024-11-06 14:16:47.074144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:12:19.536 [2024-11-06 14:16:47.074156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.536 [2024-11-06 14:16:47.074169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:12:19.536 [2024-11-06 14:16:47.074182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.536 [2024-11-06 14:16:47.074195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:12:19.536 [2024-11-06 14:16:47.074206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.536 [2024-11-06 14:16:47.074218] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(6) to be set 00:12:19.536 [2024-11-06 14:16:47.075381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:12:19.536 task offset: 90112 on job bdev=Nvme0n1 fails 00:12:19.536 00:12:19.536 Latency(us) 00:12:19.536 [2024-11-06T14:16:47.171Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:19.536 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:12:19.536 Job: Nvme0n1 ended in about 0.43 seconds with error 00:12:19.536 Verification LBA range: start 0x0 length 0x400 00:12:19.536 Nvme0n1 : 0.43 1621.63 101.35 147.42 0.00 35184.66 3868.99 35373.65 00:12:19.536 [2024-11-06T14:16:47.171Z] =================================================================================================================== 00:12:19.536 [2024-11-06T14:16:47.171Z] Total : 1621.63 101.35 147.42 0.00 35184.66 3868.99 35373.65 00:12:19.536 14:16:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.536 14:16:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:19.536 [2024-11-06 14:16:47.080603] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:19.536 [2024-11-06 14:16:47.080656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:12:19.536 14:16:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.536 14:16:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:19.536 [2024-11-06 14:16:47.090291] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
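Everything between "Running I/O for 10 seconds..." and the successful reset just above is the actual host-management exercise. First, waitforio (host_management.sh@54-62) polls bdevperf's iostat until at least 100 reads have completed (67 on the first poll, 643 a quarter second later). Then the test removes the host NQN from the subsystem, which is what produces the flood of "ABORTED - SQ DELETION" completions and the failed job, and finally re-adds it so the initiator's controller reset can succeed. A condensed sketch of that sequence, using the loop bound, sleep and threshold seen in the trace:

# Poll until bdevperf has really completed some reads on Nvme0n1.
i=10
while (( i-- )); do
    reads=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
            | jq -r '.bdevs[0].num_read_ops')
    (( reads >= 100 )) && break
    sleep 0.25
done
# Drop the host from the allowed list (this aborts the in-flight writes) ...
rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# ... then restore it so the controller reset issued by bdevperf can complete.
rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0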
00:12:19.537 14:16:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.537 14:16:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:12:20.471 14:16:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 65554 00:12:20.471 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (65554) - No such process 00:12:20.471 14:16:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:12:20.471 14:16:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:12:20.471 14:16:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:12:20.471 14:16:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:12:20.471 14:16:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:12:20.471 14:16:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:12:20.471 14:16:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:20.471 14:16:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:20.471 { 00:12:20.471 "params": { 00:12:20.471 "name": "Nvme$subsystem", 00:12:20.471 "trtype": "$TEST_TRANSPORT", 00:12:20.471 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:20.471 "adrfam": "ipv4", 00:12:20.471 "trsvcid": "$NVMF_PORT", 00:12:20.471 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:20.471 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:20.471 "hdgst": ${hdgst:-false}, 00:12:20.471 "ddgst": ${ddgst:-false} 00:12:20.471 }, 00:12:20.471 "method": "bdev_nvme_attach_controller" 00:12:20.471 } 00:12:20.471 EOF 00:12:20.471 )") 00:12:20.729 14:16:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:12:20.729 14:16:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:12:20.729 14:16:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:12:20.729 14:16:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:20.729 "params": { 00:12:20.729 "name": "Nvme0", 00:12:20.729 "trtype": "tcp", 00:12:20.729 "traddr": "10.0.0.3", 00:12:20.729 "adrfam": "ipv4", 00:12:20.729 "trsvcid": "4420", 00:12:20.729 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:20.729 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:20.729 "hdgst": false, 00:12:20.729 "ddgst": false 00:12:20.729 }, 00:12:20.729 "method": "bdev_nvme_attach_controller" 00:12:20.729 }' 00:12:20.729 [2024-11-06 14:16:48.210047] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
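The kill -9 65554 above fails with "No such process" because the 10-second bdevperf had already exited on its own; host_management.sh line 91 swallows that with || true, the per-core lock files are removed, and a second 1-second bdevperf run (the command traced here, -t 1 instead of -t 10) verifies that I/O flows again now that the host is back in the allowed list. As a quick unit check on the result tables on either side of this point, the MiB/s column is simply IOPS times the 64 KiB I/O size:

# 64 KiB is 1/16 MiB, so 1621.63 IOPS from the failed run works out to:
awk 'BEGIN { printf "%.2f MiB/s\n", 1621.63 * 65536 / 1048576 }'   # 101.35
# and the 1688.21 IOPS of the verify run gives the 105.51 MiB/s reported below.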
00:12:20.729 [2024-11-06 14:16:48.210185] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65605 ] 00:12:20.988 [2024-11-06 14:16:48.397570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.988 [2024-11-06 14:16:48.532709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.246 [2024-11-06 14:16:48.764358] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:21.504 Running I/O for 1 seconds... 00:12:22.438 1664.00 IOPS, 104.00 MiB/s 00:12:22.438 Latency(us) 00:12:22.438 [2024-11-06T14:16:50.073Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:22.438 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:12:22.438 Verification LBA range: start 0x0 length 0x400 00:12:22.438 Nvme0n1 : 1.02 1688.21 105.51 0.00 0.00 37295.05 4526.98 45901.52 00:12:22.438 [2024-11-06T14:16:50.073Z] =================================================================================================================== 00:12:22.438 [2024-11-06T14:16:50.073Z] Total : 1688.21 105.51 0.00 0.00 37295.05 4526.98 45901.52 00:12:23.813 14:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:12:23.813 14:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:12:23.813 14:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:12:23.813 14:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:12:23.813 14:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:12:23.813 14:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:23.813 14:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:12:23.813 14:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:23.813 14:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:12:23.813 14:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:23.813 14:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:23.813 rmmod nvme_tcp 00:12:23.813 rmmod nvme_fabrics 00:12:23.813 rmmod nvme_keyring 00:12:23.813 14:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:23.813 14:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:12:23.813 14:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:12:23.813 14:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 65500 ']' 00:12:23.813 14:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 65500 00:12:23.813 14:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 65500 ']' 00:12:23.813 14:16:51 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 65500 00:12:23.813 14:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:12:23.813 14:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:23.813 14:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65500 00:12:23.813 killing process with pid 65500 00:12:23.813 14:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:12:23.813 14:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:12:23.813 14:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65500' 00:12:23.813 14:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 65500 00:12:23.813 14:16:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 65500 00:12:25.192 [2024-11-06 14:16:52.594045] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:12:25.192 14:16:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:25.192 14:16:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:25.192 14:16:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:25.192 14:16:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:12:25.192 14:16:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:12:25.192 14:16:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:25.192 14:16:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:12:25.192 14:16:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:25.192 14:16:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:25.192 14:16:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:25.192 14:16:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:25.192 14:16:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:25.192 14:16:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:25.192 14:16:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:25.192 14:16:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:25.192 14:16:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:25.192 14:16:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:25.192 14:16:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:25.450 14:16:52 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:25.450 14:16:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:25.450 14:16:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:25.450 14:16:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:25.450 14:16:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:25.450 14:16:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.450 14:16:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:25.450 14:16:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.450 14:16:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:12:25.450 14:16:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:12:25.450 00:12:25.450 real 0m9.495s 00:12:25.450 user 0m34.503s 00:12:25.450 sys 0m2.419s 00:12:25.450 14:16:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:25.450 14:16:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:25.450 ************************************ 00:12:25.450 END TEST nvmf_host_management 00:12:25.450 ************************************ 00:12:25.450 14:16:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:12:25.450 14:16:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:25.450 14:16:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:25.450 14:16:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:25.450 ************************************ 00:12:25.450 START TEST nvmf_lvol 00:12:25.450 ************************************ 00:12:25.450 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:12:25.710 * Looking for test storage... 
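The real/user/sys lines and the START/END TEST banners above come from the run_test wrapper in autotest_common.sh, which is also what launches nvmf_lvol.sh here. A simplified sketch of the idea, assuming the timing is produced by the same wrapper (the real helper also manages xtrace state and failure accounting):

# Simplified run_test: banner, timed execution, closing banner.
run_test() {
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
}
run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp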
00:12:25.710 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:25.710 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:25.710 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:12:25.710 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:25.710 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:25.710 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:25.710 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:25.710 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:25.710 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:12:25.710 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:12:25.710 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:12:25.710 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:12:25.710 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:12:25.710 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:12:25.710 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:12:25.710 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:25.710 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:12:25.710 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:12:25.710 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:25.710 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:25.710 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:12:25.710 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:12:25.710 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:25.710 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:12:25.710 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:12:25.710 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:12:25.710 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:12:25.710 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:25.710 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:12:25.710 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:12:25.710 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:25.710 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:25.710 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:12:25.710 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:25.710 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:25.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.710 --rc genhtml_branch_coverage=1 00:12:25.710 --rc genhtml_function_coverage=1 00:12:25.710 --rc genhtml_legend=1 00:12:25.710 --rc geninfo_all_blocks=1 00:12:25.710 --rc geninfo_unexecuted_blocks=1 00:12:25.710 00:12:25.710 ' 00:12:25.710 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:25.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.710 --rc genhtml_branch_coverage=1 00:12:25.710 --rc genhtml_function_coverage=1 00:12:25.710 --rc genhtml_legend=1 00:12:25.710 --rc geninfo_all_blocks=1 00:12:25.710 --rc geninfo_unexecuted_blocks=1 00:12:25.710 00:12:25.710 ' 00:12:25.710 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:25.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.710 --rc genhtml_branch_coverage=1 00:12:25.710 --rc genhtml_function_coverage=1 00:12:25.710 --rc genhtml_legend=1 00:12:25.710 --rc geninfo_all_blocks=1 00:12:25.710 --rc geninfo_unexecuted_blocks=1 00:12:25.710 00:12:25.710 ' 00:12:25.710 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:25.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.710 --rc genhtml_branch_coverage=1 00:12:25.710 --rc genhtml_function_coverage=1 00:12:25.710 --rc genhtml_legend=1 00:12:25.710 --rc geninfo_all_blocks=1 00:12:25.710 --rc geninfo_unexecuted_blocks=1 00:12:25.710 00:12:25.710 ' 00:12:25.710 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:25.710 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:12:25.710 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:25.710 14:16:53 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:25.710 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:25.710 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:25.710 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:25.710 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:25.710 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:25.710 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:25.710 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:25.710 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:25.710 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:12:25.710 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:12:25.710 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:25.710 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:25.710 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:25.710 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:25.710 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:25.710 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:12:25.710 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:25.710 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:25.710 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:25.710 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.711 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.711 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.711 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:12:25.711 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.711 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:12:25.711 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:25.711 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:25.711 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:25.711 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:25.711 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:25.711 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:25.711 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:25.711 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:25.711 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:25.711 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:25.711 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:25.711 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:25.711 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:12:25.711 
14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:12:25.711 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:25.711 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:12:25.711 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:25.711 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:25.711 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:25.711 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:25.711 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:25.711 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.711 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:25.711 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.711 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:25.711 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:25.711 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:25.711 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:25.711 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:25.711 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:25.711 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:25.711 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:25.711 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:25.711 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:25.711 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:25.711 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:25.711 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:25.711 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:25.711 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:25.711 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:25.711 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:25.711 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:25.711 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:25.711 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
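A minimal sketch of the virtual topology that the nvmf_veth_init variables above describe and that the ip commands below go on to create: two initiator veth pairs kept on the host, two target veth pairs whose far ends are moved into the nvmf_tgt_ns_spdk namespace, and one bridge tying the four host-side peers together. Interface names and 10.0.0.x/24 addresses are taken from the log; this is an illustrative condensation, not the exact common.sh code.

    # build the namespace and the four veth pairs
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    # target-side interfaces live inside the namespace
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # initiators are 10.0.0.1/.2, targets 10.0.0.3/.4
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    # one bridge joins the four host-side peer interfaces
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
        ip link set "$dev" master nvmf_br
    done

After this, the log also brings every interface (and the namespace loopback) up and verifies reachability with the four pings that follow.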
00:12:25.711 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:25.711 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:25.711 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:25.969 Cannot find device "nvmf_init_br" 00:12:25.969 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:12:25.969 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:25.969 Cannot find device "nvmf_init_br2" 00:12:25.969 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:12:25.969 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:25.969 Cannot find device "nvmf_tgt_br" 00:12:25.969 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:12:25.969 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:25.969 Cannot find device "nvmf_tgt_br2" 00:12:25.969 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:12:25.969 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:25.969 Cannot find device "nvmf_init_br" 00:12:25.969 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:12:25.969 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:25.969 Cannot find device "nvmf_init_br2" 00:12:25.969 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:12:25.969 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:25.969 Cannot find device "nvmf_tgt_br" 00:12:25.969 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:12:25.969 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:25.969 Cannot find device "nvmf_tgt_br2" 00:12:25.969 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:12:25.969 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:25.969 Cannot find device "nvmf_br" 00:12:25.969 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:12:25.969 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:25.969 Cannot find device "nvmf_init_if" 00:12:25.969 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:12:25.969 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:25.969 Cannot find device "nvmf_init_if2" 00:12:25.969 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:12:25.969 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:25.969 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:25.969 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:12:25.969 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:25.969 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:12:25.969 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:12:25.969 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:25.969 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:25.969 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:25.969 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:25.969 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:25.969 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:26.226 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:26.226 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:26.226 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:26.227 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:26.227 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:26.227 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:26.227 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:26.227 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:26.227 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:26.227 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:26.227 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:26.227 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:26.227 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:26.227 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:26.227 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:26.227 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:26.227 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:26.227 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:26.227 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:26.227 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:26.227 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:26.227 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:26.227 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:26.227 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:26.227 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:26.227 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:26.227 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:26.227 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:26.227 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.126 ms 00:12:26.227 00:12:26.227 --- 10.0.0.3 ping statistics --- 00:12:26.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.227 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:12:26.227 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:26.227 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:26.227 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.088 ms 00:12:26.227 00:12:26.227 --- 10.0.0.4 ping statistics --- 00:12:26.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.227 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:12:26.227 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:26.227 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:26.227 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:12:26.227 00:12:26.227 --- 10.0.0.1 ping statistics --- 00:12:26.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.227 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:12:26.227 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:26.227 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:26.227 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:12:26.227 00:12:26.227 --- 10.0.0.2 ping statistics --- 00:12:26.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.227 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:12:26.227 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:26.227 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:12:26.227 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:26.227 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:26.227 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:26.227 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:26.227 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:26.227 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:26.227 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:26.485 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:12:26.485 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:26.485 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:26.485 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:26.485 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=65918 00:12:26.485 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:26.485 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 65918 00:12:26.485 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 65918 ']' 00:12:26.485 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.485 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:26.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:26.485 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.485 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:26.485 14:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:26.486 [2024-11-06 14:16:53.989822] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
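The nvmfappstart call above launches nvmf_tgt inside the target namespace and then waits until the application answers on its UNIX-domain RPC socket before any further RPCs are issued. A condensed sketch of that flow, using the paths and arguments shown in the log (the real waitforlisten helper in autotest_common.sh does additional bookkeeping):

    # start the target on cores 0-2, inside the namespace, in the background
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
    nvmfpid=$!

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # poll the default RPC socket until the app is ready, bail out if it died
    until "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done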
00:12:26.486 [2024-11-06 14:16:53.989986] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:26.743 [2024-11-06 14:16:54.181061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:26.743 [2024-11-06 14:16:54.312055] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:26.743 [2024-11-06 14:16:54.312131] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:26.743 [2024-11-06 14:16:54.312149] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:26.743 [2024-11-06 14:16:54.312162] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:26.743 [2024-11-06 14:16:54.312177] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:26.743 [2024-11-06 14:16:54.314289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:26.743 [2024-11-06 14:16:54.314463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.743 [2024-11-06 14:16:54.314500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:27.000 [2024-11-06 14:16:54.541467] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:27.257 14:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:27.258 14:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:12:27.258 14:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:27.258 14:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:27.258 14:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:27.258 14:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:27.258 14:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:27.515 [2024-11-06 14:16:55.118095] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:27.515 14:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:28.080 14:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:12:28.080 14:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:28.337 14:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:12:28.337 14:16:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:12:28.594 14:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:12:28.852 14:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=fe339ce9-479e-4daf-a5b3-7705aac19da7 00:12:28.852 14:16:56 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u fe339ce9-479e-4daf-a5b3-7705aac19da7 lvol 20 00:12:29.109 14:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=29ab6e53-d3a8-460b-8aad-692a6390068d 00:12:29.109 14:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:29.366 14:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 29ab6e53-d3a8-460b-8aad-692a6390068d 00:12:29.625 14:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:12:29.883 [2024-11-06 14:16:57.259363] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:29.883 14:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:12:30.143 14:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=65994 00:12:30.143 14:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:12:30.143 14:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:12:31.124 14:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 29ab6e53-d3a8-460b-8aad-692a6390068d MY_SNAPSHOT 00:12:31.383 14:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=2d8b1c1d-ceaa-4c54-bd42-dcdbb1814103 00:12:31.383 14:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 29ab6e53-d3a8-460b-8aad-692a6390068d 30 00:12:31.642 14:16:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 2d8b1c1d-ceaa-4c54-bd42-dcdbb1814103 MY_CLONE 00:12:31.901 14:16:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=4c79aa33-56af-4b1f-90ac-8d56539c895e 00:12:31.901 14:16:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 4c79aa33-56af-4b1f-90ac-8d56539c895e 00:12:32.500 14:16:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 65994 00:12:40.611 Initializing NVMe Controllers 00:12:40.611 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:12:40.611 Controller IO queue size 128, less than required. 00:12:40.611 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:40.611 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:12:40.611 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:12:40.611 Initialization complete. Launching workers. 
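The RPC sequence driven by nvmf_lvol.sh above, condensed into one place (repository paths as in the log, UUIDs abbreviated into shell variables): the test builds a raid0 bdev out of two malloc bdevs, puts a logical volume store and a 20 MiB lvol on it, exports the lvol over NVMe/TCP, and then snapshots, resizes, clones and inflates it while spdk_nvme_perf keeps random writes in flight.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf

    $rpc nvmf_create_transport -t tcp -o -u 8192                 # TCP transport
    $rpc bdev_malloc_create 64 512                               # -> Malloc0
    $rpc bdev_malloc_create 64 512                               # -> Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)               # lvstore on the raid bdev
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)              # 20 MiB logical volume
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

    # keep a random-write workload running while the lvol is manipulated underneath it
    $perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
          -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
    perf_pid=$!
    sleep 1
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)          # snapshot under I/O
    $rpc bdev_lvol_resize "$lvol" 30                             # grow the live lvol
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)               # thin clone of the snapshot
    $rpc bdev_lvol_inflate "$clone"                              # detach the clone from its snapshot
    wait "$perf_pid"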
00:12:40.611 ======================================================== 00:12:40.611 Latency(us) 00:12:40.611 Device Information : IOPS MiB/s Average min max 00:12:40.611 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9878.80 38.59 12966.22 310.65 203085.51 00:12:40.611 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9805.50 38.30 13055.93 3265.60 164170.05 00:12:40.611 ======================================================== 00:12:40.611 Total : 19684.30 76.89 13010.90 310.65 203085.51 00:12:40.611 00:12:40.611 14:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:40.611 14:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 29ab6e53-d3a8-460b-8aad-692a6390068d 00:12:40.869 14:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fe339ce9-479e-4daf-a5b3-7705aac19da7 00:12:41.128 14:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:12:41.128 14:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:12:41.128 14:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:12:41.128 14:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:41.128 14:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:12:41.128 14:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:41.128 14:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:12:41.128 14:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:41.128 14:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:41.128 rmmod nvme_tcp 00:12:41.128 rmmod nvme_fabrics 00:12:41.128 rmmod nvme_keyring 00:12:41.386 14:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:41.386 14:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:12:41.386 14:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:12:41.386 14:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 65918 ']' 00:12:41.386 14:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 65918 00:12:41.386 14:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 65918 ']' 00:12:41.386 14:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 65918 00:12:41.386 14:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:12:41.386 14:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:41.386 14:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65918 00:12:41.386 14:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:41.386 killing process with pid 65918 00:12:41.386 14:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:41.386 14:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 65918' 00:12:41.386 14:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 65918 00:12:41.386 14:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 65918 00:12:42.759 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:42.759 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:42.759 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:42.759 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:12:42.759 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:12:42.759 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:12:42.759 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:43.017 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:43.017 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:43.017 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:43.017 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:43.017 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:43.017 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:43.017 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:43.017 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:43.017 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:43.017 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:43.017 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:43.017 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:43.017 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:43.017 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:43.017 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:43.017 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:43.017 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:43.017 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:43.017 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:43.017 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:12:43.017 00:12:43.017 real 0m17.561s 00:12:43.017 user 1m7.072s 00:12:43.017 sys 0m6.004s 00:12:43.017 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:12:43.017 ************************************ 00:12:43.017 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:43.017 END TEST nvmf_lvol 00:12:43.017 ************************************ 00:12:43.276 14:17:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:12:43.276 14:17:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:43.276 14:17:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:43.276 14:17:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:43.276 ************************************ 00:12:43.276 START TEST nvmf_lvs_grow 00:12:43.276 ************************************ 00:12:43.276 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:12:43.276 * Looking for test storage... 00:12:43.276 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:43.276 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:43.276 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:12:43.276 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:43.276 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:43.276 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:43.276 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:43.276 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:43.276 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:12:43.276 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:12:43.276 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:12:43.276 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:12:43.276 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:12:43.276 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:12:43.276 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:12:43.276 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:43.276 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:12:43.276 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:12:43.276 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:43.276 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:43.276 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:12:43.276 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:12:43.276 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:43.276 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:12:43.276 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:12:43.276 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:12:43.276 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:12:43.276 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:43.276 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:12:43.276 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:12:43.276 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:43.276 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:43.276 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:12:43.276 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:43.276 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:43.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.276 --rc genhtml_branch_coverage=1 00:12:43.276 --rc genhtml_function_coverage=1 00:12:43.276 --rc genhtml_legend=1 00:12:43.276 --rc geninfo_all_blocks=1 00:12:43.276 --rc geninfo_unexecuted_blocks=1 00:12:43.276 00:12:43.276 ' 00:12:43.276 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:43.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.276 --rc genhtml_branch_coverage=1 00:12:43.276 --rc genhtml_function_coverage=1 00:12:43.276 --rc genhtml_legend=1 00:12:43.276 --rc geninfo_all_blocks=1 00:12:43.276 --rc geninfo_unexecuted_blocks=1 00:12:43.276 00:12:43.277 ' 00:12:43.277 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:43.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.277 --rc genhtml_branch_coverage=1 00:12:43.277 --rc genhtml_function_coverage=1 00:12:43.277 --rc genhtml_legend=1 00:12:43.277 --rc geninfo_all_blocks=1 00:12:43.277 --rc geninfo_unexecuted_blocks=1 00:12:43.277 00:12:43.277 ' 00:12:43.277 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:43.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.277 --rc genhtml_branch_coverage=1 00:12:43.277 --rc genhtml_function_coverage=1 00:12:43.277 --rc genhtml_legend=1 00:12:43.277 --rc geninfo_all_blocks=1 00:12:43.277 --rc geninfo_unexecuted_blocks=1 00:12:43.277 00:12:43.277 ' 00:12:43.277 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:43.277 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:12:43.536 14:17:10 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:43.536 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:43.536 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:43.536 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:43.536 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:43.536 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:43.536 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:43.536 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:43.536 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:43.536 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:43.536 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:12:43.536 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:12:43.536 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:43.536 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:43.536 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:43.536 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:43.536 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:43.536 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:12:43.536 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:43.536 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:43.536 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:43.536 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.536 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.536 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.536 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:12:43.536 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.536 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:12:43.536 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:43.536 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:43.536 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:43.536 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:43.536 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:43.536 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:43.536 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:43.536 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:43.536 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:43.536 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:43.536 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:43.536 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
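The "[: : integer expression expected" complaint above (also seen in the earlier nvmf_lvol run) comes from common.sh line 33 evaluating '[' '' -eq 1 ']': the variable being tested is unset in this job, so test prints the warning and the branch is simply skipped; the run itself is unaffected. A hedged sketch of the usual way to avoid the message, with SOME_FLAG as a stand-in since the log does not show which variable line 33 actually tests:

    # hypothetical guard; SOME_FLAG is a placeholder for the variable tested at line 33
    if [[ "${SOME_FLAG:-0}" -eq 1 ]]; then    # empty/unset defaults to 0, so test never sees ''
        echo "flag enabled"
    fi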
00:12:43.536 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:12:43.536 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:43.536 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:43.536 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:43.536 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:43.537 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:43.537 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:43.537 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:43.537 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:43.537 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:43.537 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:43.537 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:43.537 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:43.537 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:43.537 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:43.537 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:43.537 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:43.537 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:43.537 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:43.537 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:43.537 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:43.537 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:43.537 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:43.537 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:43.537 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:43.537 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:43.537 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:43.537 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:43.537 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:43.537 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:12:43.537 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:43.537 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:43.537 Cannot find device "nvmf_init_br" 00:12:43.537 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:12:43.537 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:43.537 Cannot find device "nvmf_init_br2" 00:12:43.537 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:12:43.537 14:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:43.537 Cannot find device "nvmf_tgt_br" 00:12:43.537 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:12:43.537 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:43.537 Cannot find device "nvmf_tgt_br2" 00:12:43.537 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:12:43.537 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:43.537 Cannot find device "nvmf_init_br" 00:12:43.537 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:12:43.537 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:43.537 Cannot find device "nvmf_init_br2" 00:12:43.537 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:12:43.537 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:43.537 Cannot find device "nvmf_tgt_br" 00:12:43.537 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:12:43.537 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:43.537 Cannot find device "nvmf_tgt_br2" 00:12:43.537 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:12:43.537 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:43.537 Cannot find device "nvmf_br" 00:12:43.537 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:12:43.537 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:43.537 Cannot find device "nvmf_init_if" 00:12:43.537 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:12:43.537 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:43.537 Cannot find device "nvmf_init_if2" 00:12:43.537 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:12:43.537 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:43.537 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:43.537 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:12:43.537 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:43.537 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:12:43.537 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:12:43.537 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:43.798 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:43.798 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:43.798 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:43.798 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:43.798 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:43.798 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:43.798 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:43.798 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:43.798 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:43.798 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:43.798 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:43.798 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:43.798 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:43.798 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:43.798 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:43.798 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:43.798 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:43.798 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:43.798 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:43.798 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:43.798 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:43.798 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:43.798 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:43.798 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:43.798 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
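The block above is nvmf_veth_init building the isolated test topology; the earlier "Cannot find device" / "Cannot open network namespace" messages are expected, since the helper first tears down anything left by a previous run and each failing cleanup command is followed by "true" in the trace, i.e. the failures are deliberately ignored. A condensed sketch of the equivalent manual steps, reconstructed from the trace (same interface names and addresses; the real nvmf_veth_init in nvmf/common.sh adds error handling omitted here):

# Namespace plus four veth pairs: the *_if ends carry traffic, the *_br ends get bridged.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Target-side interfaces move into the namespace that will host nvmf_tgt.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Initiator addresses stay on the host, target addresses live in the namespace.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring every link up, including loopback inside the namespace.
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# A single bridge ties the host-side peers together so 10.0.0.1/2 can reach 10.0.0.3/4.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done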
00:12:43.798 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:43.798 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:43.798 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:43.798 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:43.798 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:43.798 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:43.798 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:43.798 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:43.798 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:12:43.798 00:12:43.798 --- 10.0.0.3 ping statistics --- 00:12:43.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:43.798 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:12:43.798 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:43.798 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:43.798 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.103 ms 00:12:43.798 00:12:43.798 --- 10.0.0.4 ping statistics --- 00:12:43.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:43.798 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:12:43.798 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:43.798 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:43.798 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:12:43.798 00:12:43.798 --- 10.0.0.1 ping statistics --- 00:12:43.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:43.798 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:12:43.798 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:43.798 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:43.798 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.115 ms 00:12:43.798 00:12:43.798 --- 10.0.0.2 ping statistics --- 00:12:43.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:43.798 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:12:43.798 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:43.798 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:12:43.798 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:43.798 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:43.798 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:43.798 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:43.798 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:43.798 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:43.798 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:44.057 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:12:44.057 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:44.057 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:44.057 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:44.057 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=66389 00:12:44.057 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:44.057 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 66389 00:12:44.057 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 66389 ']' 00:12:44.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:44.057 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:44.058 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:44.058 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:44.058 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:44.058 14:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:44.058 [2024-11-06 14:17:11.566107] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
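With the topology up, the trace above opens the NVMe/TCP port, verifies reachability in both directions, and starts the target inside the namespace. "ipts" is a wrapper in nvmf/common.sh that tags each iptables rule with an SPDK_NVMF comment (the rule's own argument string), presumably so cleanup can later remove exactly these rules. A hedged sketch of those steps, with the comment text elided:

# Allow NVMe/TCP (port 4420) in from both initiator interfaces; let the bridge forward.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:...'
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:...'
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:...'

# One ping per address proves host <-> namespace connectivity before the target starts.
ping -c 1 10.0.0.3
ping -c 1 10.0.0.4
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2

# nvmfappstart then launches nvmf_tgt inside the namespace (pid 66389 in this run)
# and waitforlisten blocks until the /var/tmp/spdk.sock RPC socket answers.
modprobe nvme-tcp
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &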
00:12:44.058 [2024-11-06 14:17:11.566240] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:44.316 [2024-11-06 14:17:11.756095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:44.316 [2024-11-06 14:17:11.881075] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:44.316 [2024-11-06 14:17:11.881154] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:44.316 [2024-11-06 14:17:11.881172] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:44.316 [2024-11-06 14:17:11.881200] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:44.316 [2024-11-06 14:17:11.881215] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:44.316 [2024-11-06 14:17:11.882596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.574 [2024-11-06 14:17:12.104813] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:44.832 14:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:44.832 14:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:12:44.832 14:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:44.832 14:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:44.832 14:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:44.832 14:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:44.832 14:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:45.091 [2024-11-06 14:17:12.645059] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:45.091 14:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:12:45.091 14:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:45.091 14:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:45.091 14:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:45.091 ************************************ 00:12:45.091 START TEST lvs_grow_clean 00:12:45.091 ************************************ 00:12:45.091 14:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:12:45.091 14:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:45.091 14:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:45.091 14:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:45.091 14:17:12 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:45.091 14:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:45.091 14:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:45.091 14:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:45.091 14:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:45.091 14:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:45.350 14:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:45.350 14:17:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:45.609 14:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=16f9eca1-23af-4485-bc7f-10a0109455ee 00:12:45.609 14:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:45.609 14:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16f9eca1-23af-4485-bc7f-10a0109455ee 00:12:45.868 14:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:45.868 14:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:45.868 14:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 16f9eca1-23af-4485-bc7f-10a0109455ee lvol 150 00:12:46.127 14:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=2b345415-267c-4238-9a53-5fbd0edadab7 00:12:46.127 14:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:46.127 14:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:46.386 [2024-11-06 14:17:13.884749] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:46.386 [2024-11-06 14:17:13.884896] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:46.386 true 00:12:46.386 14:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16f9eca1-23af-4485-bc7f-10a0109455ee 00:12:46.386 14:17:13 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:46.645 14:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:46.645 14:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:46.905 14:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2b345415-267c-4238-9a53-5fbd0edadab7 00:12:47.192 14:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:12:47.192 [2024-11-06 14:17:14.737412] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:47.192 14:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:12:47.450 14:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=66473 00:12:47.450 14:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:47.450 14:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:47.450 14:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 66473 /var/tmp/bdevperf.sock 00:12:47.450 14:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 66473 ']' 00:12:47.450 14:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:47.450 14:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:47.450 14:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:47.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:47.450 14:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:47.450 14:17:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:47.450 [2024-11-06 14:17:15.070325] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
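Everything after the transport creation above is the lvs_grow_clean body: a 200 MiB file-backed AIO bdev, an lvstore with 4 MiB clusters (49 data clusters), a 150 MiB lvol, a grow of the backing file picked up by bdev_aio_rescan, and an NVMe/TCP subsystem exporting the lvol to the bdevperf process launched just above (started with -z, so it idles until the perform_tests RPC). Note that the rescan alone does not grow the lvstore: total_data_clusters is still 49 at nvmf_lvs_grow.sh@38 and only reaches 99 after the explicit bdev_lvol_grow_lvstore issued later while I/O is running. A condensed sketch of the RPC sequence, with paths and UUIDs taken from this run:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
aio_file=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
lvs_uuid=16f9eca1-23af-4485-bc7f-10a0109455ee     # returned by bdev_lvol_create_lvstore
lvol_uuid=2b345415-267c-4238-9a53-5fbd0edadab7    # returned by bdev_lvol_create

truncate -s 200M "$aio_file"
$rpc bdev_aio_create "$aio_file" aio_bdev 4096
$rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
$rpc bdev_lvol_create -u "$lvs_uuid" lvol 150      # 150 MiB logical volume

# Double the backing file and let the AIO bdev pick up the new size (51200 -> 102400 blocks).
truncate -s 400M "$aio_file"
$rpc bdev_aio_rescan aio_bdev

# Export the lvol over NVMe/TCP on the target-namespace address.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol_uuid"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

# Initiator side: bdevperf does 4 KiB random writes for 10 s once perform_tests is called.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &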
00:12:47.450 [2024-11-06 14:17:15.070479] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66473 ] 00:12:47.709 [2024-11-06 14:17:15.256288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:47.968 [2024-11-06 14:17:15.380409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:47.968 [2024-11-06 14:17:15.592419] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:48.535 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:48.535 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:12:48.535 14:17:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:48.794 Nvme0n1 00:12:48.794 14:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:48.794 [ 00:12:48.794 { 00:12:48.794 "name": "Nvme0n1", 00:12:48.794 "aliases": [ 00:12:48.794 "2b345415-267c-4238-9a53-5fbd0edadab7" 00:12:48.794 ], 00:12:48.794 "product_name": "NVMe disk", 00:12:48.794 "block_size": 4096, 00:12:48.794 "num_blocks": 38912, 00:12:48.794 "uuid": "2b345415-267c-4238-9a53-5fbd0edadab7", 00:12:48.794 "numa_id": -1, 00:12:48.794 "assigned_rate_limits": { 00:12:48.794 "rw_ios_per_sec": 0, 00:12:48.794 "rw_mbytes_per_sec": 0, 00:12:48.794 "r_mbytes_per_sec": 0, 00:12:48.794 "w_mbytes_per_sec": 0 00:12:48.794 }, 00:12:48.794 "claimed": false, 00:12:48.794 "zoned": false, 00:12:48.794 "supported_io_types": { 00:12:48.794 "read": true, 00:12:48.794 "write": true, 00:12:48.794 "unmap": true, 00:12:48.794 "flush": true, 00:12:48.794 "reset": true, 00:12:48.794 "nvme_admin": true, 00:12:48.794 "nvme_io": true, 00:12:48.794 "nvme_io_md": false, 00:12:48.794 "write_zeroes": true, 00:12:48.794 "zcopy": false, 00:12:48.794 "get_zone_info": false, 00:12:48.794 "zone_management": false, 00:12:48.794 "zone_append": false, 00:12:48.794 "compare": true, 00:12:48.794 "compare_and_write": true, 00:12:48.794 "abort": true, 00:12:48.794 "seek_hole": false, 00:12:48.794 "seek_data": false, 00:12:48.794 "copy": true, 00:12:48.794 "nvme_iov_md": false 00:12:48.794 }, 00:12:48.795 "memory_domains": [ 00:12:48.795 { 00:12:48.795 "dma_device_id": "system", 00:12:48.795 "dma_device_type": 1 00:12:48.795 } 00:12:48.795 ], 00:12:48.795 "driver_specific": { 00:12:48.795 "nvme": [ 00:12:48.795 { 00:12:48.795 "trid": { 00:12:48.795 "trtype": "TCP", 00:12:48.795 "adrfam": "IPv4", 00:12:48.795 "traddr": "10.0.0.3", 00:12:48.795 "trsvcid": "4420", 00:12:48.795 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:48.795 }, 00:12:48.795 "ctrlr_data": { 00:12:48.795 "cntlid": 1, 00:12:48.795 "vendor_id": "0x8086", 00:12:48.795 "model_number": "SPDK bdev Controller", 00:12:48.795 "serial_number": "SPDK0", 00:12:48.795 "firmware_revision": "25.01", 00:12:48.795 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:48.795 "oacs": { 00:12:48.795 "security": 0, 00:12:48.795 "format": 0, 00:12:48.795 "firmware": 0, 
00:12:48.795 "ns_manage": 0 00:12:48.795 }, 00:12:48.795 "multi_ctrlr": true, 00:12:48.795 "ana_reporting": false 00:12:48.795 }, 00:12:48.795 "vs": { 00:12:48.795 "nvme_version": "1.3" 00:12:48.795 }, 00:12:48.795 "ns_data": { 00:12:48.795 "id": 1, 00:12:48.795 "can_share": true 00:12:48.795 } 00:12:48.795 } 00:12:48.795 ], 00:12:48.795 "mp_policy": "active_passive" 00:12:48.795 } 00:12:48.795 } 00:12:48.795 ] 00:12:49.054 14:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:49.054 14:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=66491 00:12:49.054 14:17:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:49.054 Running I/O for 10 seconds... 00:12:49.991 Latency(us) 00:12:49.991 [2024-11-06T14:17:17.626Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:49.991 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:49.991 Nvme0n1 : 1.00 7409.00 28.94 0.00 0.00 0.00 0.00 0.00 00:12:49.991 [2024-11-06T14:17:17.626Z] =================================================================================================================== 00:12:49.991 [2024-11-06T14:17:17.626Z] Total : 7409.00 28.94 0.00 0.00 0.00 0.00 0.00 00:12:49.991 00:12:50.927 14:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 16f9eca1-23af-4485-bc7f-10a0109455ee 00:12:50.927 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:50.927 Nvme0n1 : 2.00 7416.50 28.97 0.00 0.00 0.00 0.00 0.00 00:12:50.927 [2024-11-06T14:17:18.562Z] =================================================================================================================== 00:12:50.927 [2024-11-06T14:17:18.562Z] Total : 7416.50 28.97 0.00 0.00 0.00 0.00 0.00 00:12:50.927 00:12:51.186 true 00:12:51.186 14:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16f9eca1-23af-4485-bc7f-10a0109455ee 00:12:51.186 14:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:51.445 14:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:51.445 14:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:51.445 14:17:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 66491 00:12:52.013 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:52.013 Nvme0n1 : 3.00 7484.33 29.24 0.00 0.00 0.00 0.00 0.00 00:12:52.013 [2024-11-06T14:17:19.648Z] =================================================================================================================== 00:12:52.013 [2024-11-06T14:17:19.648Z] Total : 7484.33 29.24 0.00 0.00 0.00 0.00 0.00 00:12:52.013 00:12:52.948 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:52.948 Nvme0n1 : 4.00 7486.50 29.24 0.00 0.00 0.00 0.00 0.00 00:12:52.948 [2024-11-06T14:17:20.583Z] 
=================================================================================================================== 00:12:52.948 [2024-11-06T14:17:20.583Z] Total : 7486.50 29.24 0.00 0.00 0.00 0.00 0.00 00:12:52.948 00:12:54.323 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:54.323 Nvme0n1 : 5.00 7437.00 29.05 0.00 0.00 0.00 0.00 0.00 00:12:54.323 [2024-11-06T14:17:21.958Z] =================================================================================================================== 00:12:54.323 [2024-11-06T14:17:21.958Z] Total : 7437.00 29.05 0.00 0.00 0.00 0.00 0.00 00:12:54.323 00:12:55.258 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:55.259 Nvme0n1 : 6.00 7382.83 28.84 0.00 0.00 0.00 0.00 0.00 00:12:55.259 [2024-11-06T14:17:22.894Z] =================================================================================================================== 00:12:55.259 [2024-11-06T14:17:22.894Z] Total : 7382.83 28.84 0.00 0.00 0.00 0.00 0.00 00:12:55.259 00:12:56.194 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:56.194 Nvme0n1 : 7.00 7344.14 28.69 0.00 0.00 0.00 0.00 0.00 00:12:56.194 [2024-11-06T14:17:23.829Z] =================================================================================================================== 00:12:56.194 [2024-11-06T14:17:23.829Z] Total : 7344.14 28.69 0.00 0.00 0.00 0.00 0.00 00:12:56.194 00:12:57.128 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:57.128 Nvme0n1 : 8.00 7346.88 28.70 0.00 0.00 0.00 0.00 0.00 00:12:57.128 [2024-11-06T14:17:24.763Z] =================================================================================================================== 00:12:57.128 [2024-11-06T14:17:24.763Z] Total : 7346.88 28.70 0.00 0.00 0.00 0.00 0.00 00:12:57.128 00:12:58.061 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:58.061 Nvme0n1 : 9.00 7326.33 28.62 0.00 0.00 0.00 0.00 0.00 00:12:58.061 [2024-11-06T14:17:25.696Z] =================================================================================================================== 00:12:58.061 [2024-11-06T14:17:25.696Z] Total : 7326.33 28.62 0.00 0.00 0.00 0.00 0.00 00:12:58.061 00:12:58.998 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:58.998 Nvme0n1 : 10.00 7343.00 28.68 0.00 0.00 0.00 0.00 0.00 00:12:58.998 [2024-11-06T14:17:26.633Z] =================================================================================================================== 00:12:58.998 [2024-11-06T14:17:26.633Z] Total : 7343.00 28.68 0.00 0.00 0.00 0.00 0.00 00:12:58.998 00:12:58.998 00:12:58.998 Latency(us) 00:12:58.998 [2024-11-06T14:17:26.633Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:58.998 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:58.998 Nvme0n1 : 10.00 7352.77 28.72 0.00 0.00 17403.99 7843.26 64009.46 00:12:58.998 [2024-11-06T14:17:26.633Z] =================================================================================================================== 00:12:58.998 [2024-11-06T14:17:26.633Z] Total : 7352.77 28.72 0.00 0.00 17403.99 7843.26 64009.46 00:12:58.998 { 00:12:58.998 "results": [ 00:12:58.998 { 00:12:58.998 "job": "Nvme0n1", 00:12:58.998 "core_mask": "0x2", 00:12:58.998 "workload": "randwrite", 00:12:58.998 "status": "finished", 00:12:58.998 "queue_depth": 128, 00:12:58.998 "io_size": 4096, 00:12:58.998 "runtime": 
10.004122, 00:12:58.998 "iops": 7352.769188540484, 00:12:58.998 "mibps": 28.721754642736265, 00:12:58.998 "io_failed": 0, 00:12:58.998 "io_timeout": 0, 00:12:58.998 "avg_latency_us": 17403.99471385092, 00:12:58.998 "min_latency_us": 7843.264257028112, 00:12:58.998 "max_latency_us": 64009.458634538154 00:12:58.998 } 00:12:58.998 ], 00:12:58.998 "core_count": 1 00:12:58.998 } 00:12:58.998 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 66473 00:12:58.998 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 66473 ']' 00:12:58.998 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 66473 00:12:58.998 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:12:58.998 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:58.998 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 66473 00:12:58.998 killing process with pid 66473 00:12:58.998 Received shutdown signal, test time was about 10.000000 seconds 00:12:58.998 00:12:58.998 Latency(us) 00:12:58.998 [2024-11-06T14:17:26.633Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:58.998 [2024-11-06T14:17:26.633Z] =================================================================================================================== 00:12:58.998 [2024-11-06T14:17:26.633Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:58.998 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:12:58.998 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:12:58.998 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 66473' 00:12:58.998 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 66473 00:12:58.998 14:17:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 66473 00:13:00.390 14:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:13:00.390 14:17:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:00.650 14:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16f9eca1-23af-4485-bc7f-10a0109455ee 00:13:00.650 14:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:00.910 14:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:00.910 14:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:13:00.910 14:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:01.170 [2024-11-06 14:17:28.640159] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:01.170 14:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16f9eca1-23af-4485-bc7f-10a0109455ee 00:13:01.170 14:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:13:01.170 14:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16f9eca1-23af-4485-bc7f-10a0109455ee 00:13:01.170 14:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:01.170 14:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:01.170 14:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:01.170 14:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:01.170 14:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:01.170 14:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:01.170 14:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:01.170 14:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:01.170 14:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16f9eca1-23af-4485-bc7f-10a0109455ee 00:13:01.450 request: 00:13:01.450 { 00:13:01.450 "uuid": "16f9eca1-23af-4485-bc7f-10a0109455ee", 00:13:01.450 "method": "bdev_lvol_get_lvstores", 00:13:01.450 "req_id": 1 00:13:01.450 } 00:13:01.450 Got JSON-RPC error response 00:13:01.450 response: 00:13:01.450 { 00:13:01.450 "code": -19, 00:13:01.450 "message": "No such device" 00:13:01.450 } 00:13:01.450 14:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:13:01.450 14:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:01.450 14:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:01.450 14:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:01.450 14:17:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:01.730 aio_bdev 00:13:01.730 14:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
2b345415-267c-4238-9a53-5fbd0edadab7 00:13:01.730 14:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=2b345415-267c-4238-9a53-5fbd0edadab7 00:13:01.730 14:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:01.730 14:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:13:01.730 14:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:01.730 14:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:01.730 14:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:01.989 14:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2b345415-267c-4238-9a53-5fbd0edadab7 -t 2000 00:13:01.989 [ 00:13:01.989 { 00:13:01.989 "name": "2b345415-267c-4238-9a53-5fbd0edadab7", 00:13:01.989 "aliases": [ 00:13:01.989 "lvs/lvol" 00:13:01.989 ], 00:13:01.989 "product_name": "Logical Volume", 00:13:01.989 "block_size": 4096, 00:13:01.989 "num_blocks": 38912, 00:13:01.989 "uuid": "2b345415-267c-4238-9a53-5fbd0edadab7", 00:13:01.989 "assigned_rate_limits": { 00:13:01.989 "rw_ios_per_sec": 0, 00:13:01.989 "rw_mbytes_per_sec": 0, 00:13:01.989 "r_mbytes_per_sec": 0, 00:13:01.989 "w_mbytes_per_sec": 0 00:13:01.989 }, 00:13:01.989 "claimed": false, 00:13:01.989 "zoned": false, 00:13:01.989 "supported_io_types": { 00:13:01.989 "read": true, 00:13:01.989 "write": true, 00:13:01.989 "unmap": true, 00:13:01.989 "flush": false, 00:13:01.989 "reset": true, 00:13:01.989 "nvme_admin": false, 00:13:01.989 "nvme_io": false, 00:13:01.989 "nvme_io_md": false, 00:13:01.989 "write_zeroes": true, 00:13:01.989 "zcopy": false, 00:13:01.989 "get_zone_info": false, 00:13:01.989 "zone_management": false, 00:13:01.989 "zone_append": false, 00:13:01.989 "compare": false, 00:13:01.989 "compare_and_write": false, 00:13:01.989 "abort": false, 00:13:01.989 "seek_hole": true, 00:13:01.989 "seek_data": true, 00:13:01.989 "copy": false, 00:13:01.989 "nvme_iov_md": false 00:13:01.989 }, 00:13:01.989 "driver_specific": { 00:13:01.989 "lvol": { 00:13:01.989 "lvol_store_uuid": "16f9eca1-23af-4485-bc7f-10a0109455ee", 00:13:01.989 "base_bdev": "aio_bdev", 00:13:01.989 "thin_provision": false, 00:13:01.989 "num_allocated_clusters": 38, 00:13:01.989 "snapshot": false, 00:13:01.989 "clone": false, 00:13:01.989 "esnap_clone": false 00:13:01.989 } 00:13:01.989 } 00:13:01.989 } 00:13:01.989 ] 00:13:01.989 14:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:13:01.989 14:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:01.989 14:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16f9eca1-23af-4485-bc7f-10a0109455ee 00:13:02.249 14:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:02.249 14:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r 
'.[0].total_data_clusters' 00:13:02.249 14:17:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 16f9eca1-23af-4485-bc7f-10a0109455ee 00:13:02.507 14:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:02.507 14:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 2b345415-267c-4238-9a53-5fbd0edadab7 00:13:02.766 14:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 16f9eca1-23af-4485-bc7f-10a0109455ee 00:13:03.026 14:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:03.284 14:17:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:03.852 ************************************ 00:13:03.852 END TEST lvs_grow_clean 00:13:03.852 ************************************ 00:13:03.852 00:13:03.852 real 0m18.521s 00:13:03.852 user 0m16.638s 00:13:03.852 sys 0m3.135s 00:13:03.852 14:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:03.852 14:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:03.852 14:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:13:03.852 14:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:03.852 14:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:03.852 14:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:03.852 ************************************ 00:13:03.852 START TEST lvs_grow_dirty 00:13:03.852 ************************************ 00:13:03.852 14:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:13:03.852 14:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:03.852 14:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:03.852 14:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:03.852 14:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:03.852 14:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:03.852 14:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:03.852 14:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:03.852 14:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:03.852 14:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:04.110 14:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:04.110 14:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:04.368 14:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=6640b4ce-4c9e-4b01-b2eb-add5bb86b3dd 00:13:04.368 14:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6640b4ce-4c9e-4b01-b2eb-add5bb86b3dd 00:13:04.368 14:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:04.368 14:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:04.368 14:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:04.368 14:17:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6640b4ce-4c9e-4b01-b2eb-add5bb86b3dd lvol 150 00:13:04.626 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=e5784731-967b-425c-89ca-646db107035f 00:13:04.626 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:04.626 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:04.885 [2024-11-06 14:17:32.400785] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:04.885 [2024-11-06 14:17:32.400941] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:04.885 true 00:13:04.885 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:04.885 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6640b4ce-4c9e-4b01-b2eb-add5bb86b3dd 00:13:05.144 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:05.144 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:05.404 14:17:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e5784731-967b-425c-89ca-646db107035f 00:13:05.662 14:17:33 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:13:05.920 [2024-11-06 14:17:33.371946] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:05.920 14:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:13:06.179 14:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=66743 00:13:06.179 14:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:06.179 14:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:06.179 14:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 66743 /var/tmp/bdevperf.sock 00:13:06.179 14:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 66743 ']' 00:13:06.179 14:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:06.179 14:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:06.179 14:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:06.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:06.179 14:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:06.179 14:17:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:06.179 [2024-11-06 14:17:33.714513] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:13:06.179 [2024-11-06 14:17:33.714666] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66743 ] 00:13:06.438 [2024-11-06 14:17:33.898151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:06.438 [2024-11-06 14:17:34.028576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:06.697 [2024-11-06 14:17:34.250825] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:06.956 14:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:06.956 14:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:13:06.956 14:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:07.214 Nvme0n1 00:13:07.474 14:17:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:07.474 [ 00:13:07.474 { 00:13:07.474 "name": "Nvme0n1", 00:13:07.474 "aliases": [ 00:13:07.474 "e5784731-967b-425c-89ca-646db107035f" 00:13:07.474 ], 00:13:07.474 "product_name": "NVMe disk", 00:13:07.474 "block_size": 4096, 00:13:07.474 "num_blocks": 38912, 00:13:07.474 "uuid": "e5784731-967b-425c-89ca-646db107035f", 00:13:07.474 "numa_id": -1, 00:13:07.474 "assigned_rate_limits": { 00:13:07.474 "rw_ios_per_sec": 0, 00:13:07.474 "rw_mbytes_per_sec": 0, 00:13:07.474 "r_mbytes_per_sec": 0, 00:13:07.474 "w_mbytes_per_sec": 0 00:13:07.474 }, 00:13:07.474 "claimed": false, 00:13:07.474 "zoned": false, 00:13:07.474 "supported_io_types": { 00:13:07.474 "read": true, 00:13:07.474 "write": true, 00:13:07.474 "unmap": true, 00:13:07.474 "flush": true, 00:13:07.474 "reset": true, 00:13:07.474 "nvme_admin": true, 00:13:07.474 "nvme_io": true, 00:13:07.474 "nvme_io_md": false, 00:13:07.474 "write_zeroes": true, 00:13:07.474 "zcopy": false, 00:13:07.474 "get_zone_info": false, 00:13:07.474 "zone_management": false, 00:13:07.474 "zone_append": false, 00:13:07.474 "compare": true, 00:13:07.474 "compare_and_write": true, 00:13:07.474 "abort": true, 00:13:07.474 "seek_hole": false, 00:13:07.474 "seek_data": false, 00:13:07.474 "copy": true, 00:13:07.474 "nvme_iov_md": false 00:13:07.474 }, 00:13:07.474 "memory_domains": [ 00:13:07.474 { 00:13:07.474 "dma_device_id": "system", 00:13:07.474 "dma_device_type": 1 00:13:07.474 } 00:13:07.474 ], 00:13:07.474 "driver_specific": { 00:13:07.474 "nvme": [ 00:13:07.474 { 00:13:07.474 "trid": { 00:13:07.474 "trtype": "TCP", 00:13:07.474 "adrfam": "IPv4", 00:13:07.474 "traddr": "10.0.0.3", 00:13:07.474 "trsvcid": "4420", 00:13:07.474 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:07.474 }, 00:13:07.474 "ctrlr_data": { 00:13:07.474 "cntlid": 1, 00:13:07.474 "vendor_id": "0x8086", 00:13:07.474 "model_number": "SPDK bdev Controller", 00:13:07.474 "serial_number": "SPDK0", 00:13:07.474 "firmware_revision": "25.01", 00:13:07.474 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:07.474 "oacs": { 00:13:07.474 "security": 0, 00:13:07.474 "format": 0, 00:13:07.474 "firmware": 0, 
00:13:07.474 "ns_manage": 0 00:13:07.474 }, 00:13:07.474 "multi_ctrlr": true, 00:13:07.474 "ana_reporting": false 00:13:07.474 }, 00:13:07.474 "vs": { 00:13:07.474 "nvme_version": "1.3" 00:13:07.474 }, 00:13:07.474 "ns_data": { 00:13:07.474 "id": 1, 00:13:07.474 "can_share": true 00:13:07.474 } 00:13:07.474 } 00:13:07.474 ], 00:13:07.474 "mp_policy": "active_passive" 00:13:07.474 } 00:13:07.474 } 00:13:07.474 ] 00:13:07.733 14:17:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:07.733 14:17:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=66762 00:13:07.733 14:17:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:07.733 Running I/O for 10 seconds... 00:13:08.739 Latency(us) 00:13:08.739 [2024-11-06T14:17:36.375Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:08.740 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:08.740 Nvme0n1 : 1.00 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:13:08.740 [2024-11-06T14:17:36.375Z] =================================================================================================================== 00:13:08.740 [2024-11-06T14:17:36.375Z] Total : 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:13:08.740 00:13:09.676 14:17:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6640b4ce-4c9e-4b01-b2eb-add5bb86b3dd 00:13:09.676 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:09.676 Nvme0n1 : 2.00 7874.00 30.76 0.00 0.00 0.00 0.00 0.00 00:13:09.676 [2024-11-06T14:17:37.311Z] =================================================================================================================== 00:13:09.676 [2024-11-06T14:17:37.311Z] Total : 7874.00 30.76 0.00 0.00 0.00 0.00 0.00 00:13:09.676 00:13:09.934 true 00:13:09.934 14:17:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6640b4ce-4c9e-4b01-b2eb-add5bb86b3dd 00:13:09.934 14:17:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:10.192 14:17:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:10.192 14:17:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:10.192 14:17:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 66762 00:13:10.759 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:10.760 Nvme0n1 : 3.00 7916.33 30.92 0.00 0.00 0.00 0.00 0.00 00:13:10.760 [2024-11-06T14:17:38.395Z] =================================================================================================================== 00:13:10.760 [2024-11-06T14:17:38.395Z] Total : 7916.33 30.92 0.00 0.00 0.00 0.00 0.00 00:13:10.760 00:13:11.696 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:11.696 Nvme0n1 : 4.00 7905.75 30.88 0.00 0.00 0.00 0.00 0.00 00:13:11.696 [2024-11-06T14:17:39.331Z] 
=================================================================================================================== 00:13:11.696 [2024-11-06T14:17:39.331Z] Total : 7905.75 30.88 0.00 0.00 0.00 0.00 0.00 00:13:11.696 00:13:12.631 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:12.631 Nvme0n1 : 5.00 7616.20 29.75 0.00 0.00 0.00 0.00 0.00 00:13:12.631 [2024-11-06T14:17:40.266Z] =================================================================================================================== 00:13:12.631 [2024-11-06T14:17:40.266Z] Total : 7616.20 29.75 0.00 0.00 0.00 0.00 0.00 00:13:12.631 00:13:14.008 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:14.008 Nvme0n1 : 6.00 6872.67 26.85 0.00 0.00 0.00 0.00 0.00 00:13:14.008 [2024-11-06T14:17:41.643Z] =================================================================================================================== 00:13:14.008 [2024-11-06T14:17:41.643Z] Total : 6872.67 26.85 0.00 0.00 0.00 0.00 0.00 00:13:14.008 00:13:14.944 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:14.944 Nvme0n1 : 7.00 6989.14 27.30 0.00 0.00 0.00 0.00 0.00 00:13:14.944 [2024-11-06T14:17:42.579Z] =================================================================================================================== 00:13:14.944 [2024-11-06T14:17:42.579Z] Total : 6989.14 27.30 0.00 0.00 0.00 0.00 0.00 00:13:14.944 00:13:15.878 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:15.878 Nvme0n1 : 8.00 7069.62 27.62 0.00 0.00 0.00 0.00 0.00 00:13:15.878 [2024-11-06T14:17:43.513Z] =================================================================================================================== 00:13:15.878 [2024-11-06T14:17:43.514Z] Total : 7069.62 27.62 0.00 0.00 0.00 0.00 0.00 00:13:15.879 00:13:16.812 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:16.812 Nvme0n1 : 9.00 7141.78 27.90 0.00 0.00 0.00 0.00 0.00 00:13:16.812 [2024-11-06T14:17:44.447Z] =================================================================================================================== 00:13:16.812 [2024-11-06T14:17:44.447Z] Total : 7141.78 27.90 0.00 0.00 0.00 0.00 0.00 00:13:16.812 00:13:17.745 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:17.745 Nvme0n1 : 10.00 7176.50 28.03 0.00 0.00 0.00 0.00 0.00 00:13:17.745 [2024-11-06T14:17:45.380Z] =================================================================================================================== 00:13:17.745 [2024-11-06T14:17:45.380Z] Total : 7176.50 28.03 0.00 0.00 0.00 0.00 0.00 00:13:17.745 00:13:17.745 00:13:17.746 Latency(us) 00:13:17.746 [2024-11-06T14:17:45.381Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:17.746 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:17.746 Nvme0n1 : 10.02 7174.97 28.03 0.00 0.00 17822.34 6185.12 771482.42 00:13:17.746 [2024-11-06T14:17:45.381Z] =================================================================================================================== 00:13:17.746 [2024-11-06T14:17:45.381Z] Total : 7174.97 28.03 0.00 0.00 17822.34 6185.12 771482.42 00:13:17.746 { 00:13:17.746 "results": [ 00:13:17.746 { 00:13:17.746 "job": "Nvme0n1", 00:13:17.746 "core_mask": "0x2", 00:13:17.746 "workload": "randwrite", 00:13:17.746 "status": "finished", 00:13:17.746 "queue_depth": 128, 00:13:17.746 "io_size": 4096, 00:13:17.746 "runtime": 
10.016621, 00:13:17.746 "iops": 7174.974474925227, 00:13:17.746 "mibps": 28.027244042676667, 00:13:17.746 "io_failed": 0, 00:13:17.746 "io_timeout": 0, 00:13:17.746 "avg_latency_us": 17822.34248106816, 00:13:17.746 "min_latency_us": 6185.1244979919675, 00:13:17.746 "max_latency_us": 771482.4224899599 00:13:17.746 } 00:13:17.746 ], 00:13:17.746 "core_count": 1 00:13:17.746 } 00:13:17.746 14:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 66743 00:13:17.746 14:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 66743 ']' 00:13:17.746 14:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 66743 00:13:17.746 14:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:13:17.746 14:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:17.746 14:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 66743 00:13:17.746 14:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:13:17.746 killing process with pid 66743 00:13:17.746 Received shutdown signal, test time was about 10.000000 seconds 00:13:17.746 00:13:17.746 Latency(us) 00:13:17.746 [2024-11-06T14:17:45.381Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:17.746 [2024-11-06T14:17:45.381Z] =================================================================================================================== 00:13:17.746 [2024-11-06T14:17:45.381Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:17.746 14:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:13:17.746 14:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 66743' 00:13:17.746 14:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 66743 00:13:17.746 14:17:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 66743 00:13:19.120 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:13:19.120 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:19.378 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6640b4ce-4c9e-4b01-b2eb-add5bb86b3dd 00:13:19.378 14:17:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:19.636 14:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:19.636 14:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:13:19.636 14:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 66389 
00:13:19.636 14:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 66389 00:13:19.636 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 66389 Killed "${NVMF_APP[@]}" "$@" 00:13:19.636 14:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:13:19.636 14:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:13:19.636 14:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:19.636 14:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:19.636 14:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:19.636 14:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=66907 00:13:19.636 14:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 66907 00:13:19.636 14:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 66907 ']' 00:13:19.636 14:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:19.636 14:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:19.636 14:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:19.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:19.636 14:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:19.636 14:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:19.636 14:17:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:19.894 [2024-11-06 14:17:47.362657] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:13:19.894 [2024-11-06 14:17:47.362809] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:20.152 [2024-11-06 14:17:47.557914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.152 [2024-11-06 14:17:47.681626] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:20.152 [2024-11-06 14:17:47.681711] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:20.152 [2024-11-06 14:17:47.681728] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:20.152 [2024-11-06 14:17:47.681753] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:20.152 [2024-11-06 14:17:47.681767] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:20.152 [2024-11-06 14:17:47.683109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.409 [2024-11-06 14:17:47.903336] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:20.668 14:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:20.668 14:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:13:20.668 14:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:20.668 14:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:20.668 14:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:20.668 14:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:20.668 14:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:20.926 [2024-11-06 14:17:48.508115] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:13:20.926 [2024-11-06 14:17:48.508430] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:13:20.926 [2024-11-06 14:17:48.508708] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:13:21.183 14:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:13:21.183 14:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev e5784731-967b-425c-89ca-646db107035f 00:13:21.183 14:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=e5784731-967b-425c-89ca-646db107035f 00:13:21.183 14:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:21.183 14:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:13:21.183 14:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:21.183 14:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:21.183 14:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:21.183 14:17:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e5784731-967b-425c-89ca-646db107035f -t 2000 00:13:21.444 [ 00:13:21.444 { 00:13:21.444 "name": "e5784731-967b-425c-89ca-646db107035f", 00:13:21.444 "aliases": [ 00:13:21.444 "lvs/lvol" 00:13:21.444 ], 00:13:21.444 "product_name": "Logical Volume", 00:13:21.444 "block_size": 4096, 00:13:21.444 "num_blocks": 38912, 00:13:21.444 "uuid": "e5784731-967b-425c-89ca-646db107035f", 00:13:21.444 "assigned_rate_limits": { 00:13:21.444 "rw_ios_per_sec": 0, 00:13:21.444 "rw_mbytes_per_sec": 0, 00:13:21.444 "r_mbytes_per_sec": 0, 00:13:21.445 "w_mbytes_per_sec": 0 00:13:21.445 }, 00:13:21.445 
"claimed": false, 00:13:21.445 "zoned": false, 00:13:21.445 "supported_io_types": { 00:13:21.445 "read": true, 00:13:21.445 "write": true, 00:13:21.445 "unmap": true, 00:13:21.445 "flush": false, 00:13:21.445 "reset": true, 00:13:21.445 "nvme_admin": false, 00:13:21.445 "nvme_io": false, 00:13:21.445 "nvme_io_md": false, 00:13:21.445 "write_zeroes": true, 00:13:21.445 "zcopy": false, 00:13:21.445 "get_zone_info": false, 00:13:21.445 "zone_management": false, 00:13:21.445 "zone_append": false, 00:13:21.445 "compare": false, 00:13:21.445 "compare_and_write": false, 00:13:21.445 "abort": false, 00:13:21.445 "seek_hole": true, 00:13:21.445 "seek_data": true, 00:13:21.445 "copy": false, 00:13:21.445 "nvme_iov_md": false 00:13:21.445 }, 00:13:21.445 "driver_specific": { 00:13:21.445 "lvol": { 00:13:21.445 "lvol_store_uuid": "6640b4ce-4c9e-4b01-b2eb-add5bb86b3dd", 00:13:21.445 "base_bdev": "aio_bdev", 00:13:21.445 "thin_provision": false, 00:13:21.445 "num_allocated_clusters": 38, 00:13:21.445 "snapshot": false, 00:13:21.445 "clone": false, 00:13:21.445 "esnap_clone": false 00:13:21.445 } 00:13:21.445 } 00:13:21.445 } 00:13:21.445 ] 00:13:21.445 14:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:13:21.445 14:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6640b4ce-4c9e-4b01-b2eb-add5bb86b3dd 00:13:21.445 14:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:13:21.703 14:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:13:21.703 14:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6640b4ce-4c9e-4b01-b2eb-add5bb86b3dd 00:13:21.703 14:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:13:21.961 14:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:13:21.961 14:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:22.220 [2024-11-06 14:17:49.671561] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:22.220 14:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6640b4ce-4c9e-4b01-b2eb-add5bb86b3dd 00:13:22.220 14:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:13:22.220 14:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6640b4ce-4c9e-4b01-b2eb-add5bb86b3dd 00:13:22.220 14:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:22.220 14:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:22.220 14:17:49 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:22.220 14:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:22.220 14:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:22.220 14:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:22.220 14:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:22.220 14:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:22.220 14:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6640b4ce-4c9e-4b01-b2eb-add5bb86b3dd 00:13:22.477 request: 00:13:22.477 { 00:13:22.477 "uuid": "6640b4ce-4c9e-4b01-b2eb-add5bb86b3dd", 00:13:22.477 "method": "bdev_lvol_get_lvstores", 00:13:22.477 "req_id": 1 00:13:22.477 } 00:13:22.477 Got JSON-RPC error response 00:13:22.477 response: 00:13:22.477 { 00:13:22.477 "code": -19, 00:13:22.477 "message": "No such device" 00:13:22.477 } 00:13:22.477 14:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:13:22.477 14:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:22.477 14:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:22.477 14:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:22.477 14:17:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:22.734 aio_bdev 00:13:22.734 14:17:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e5784731-967b-425c-89ca-646db107035f 00:13:22.734 14:17:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=e5784731-967b-425c-89ca-646db107035f 00:13:22.734 14:17:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:22.734 14:17:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:13:22.734 14:17:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:22.734 14:17:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:22.734 14:17:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:22.992 14:17:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e5784731-967b-425c-89ca-646db107035f -t 2000 00:13:22.992 [ 00:13:22.992 { 
00:13:22.992 "name": "e5784731-967b-425c-89ca-646db107035f", 00:13:22.992 "aliases": [ 00:13:22.992 "lvs/lvol" 00:13:22.992 ], 00:13:22.992 "product_name": "Logical Volume", 00:13:22.992 "block_size": 4096, 00:13:22.992 "num_blocks": 38912, 00:13:22.992 "uuid": "e5784731-967b-425c-89ca-646db107035f", 00:13:22.992 "assigned_rate_limits": { 00:13:22.992 "rw_ios_per_sec": 0, 00:13:22.992 "rw_mbytes_per_sec": 0, 00:13:22.992 "r_mbytes_per_sec": 0, 00:13:22.992 "w_mbytes_per_sec": 0 00:13:22.992 }, 00:13:22.992 "claimed": false, 00:13:22.992 "zoned": false, 00:13:22.992 "supported_io_types": { 00:13:22.992 "read": true, 00:13:22.992 "write": true, 00:13:22.992 "unmap": true, 00:13:22.992 "flush": false, 00:13:22.992 "reset": true, 00:13:22.992 "nvme_admin": false, 00:13:22.992 "nvme_io": false, 00:13:22.992 "nvme_io_md": false, 00:13:22.992 "write_zeroes": true, 00:13:22.992 "zcopy": false, 00:13:22.992 "get_zone_info": false, 00:13:22.992 "zone_management": false, 00:13:22.992 "zone_append": false, 00:13:22.993 "compare": false, 00:13:22.993 "compare_and_write": false, 00:13:22.993 "abort": false, 00:13:22.993 "seek_hole": true, 00:13:22.993 "seek_data": true, 00:13:22.993 "copy": false, 00:13:22.993 "nvme_iov_md": false 00:13:22.993 }, 00:13:22.993 "driver_specific": { 00:13:22.993 "lvol": { 00:13:22.993 "lvol_store_uuid": "6640b4ce-4c9e-4b01-b2eb-add5bb86b3dd", 00:13:22.993 "base_bdev": "aio_bdev", 00:13:22.993 "thin_provision": false, 00:13:22.993 "num_allocated_clusters": 38, 00:13:22.993 "snapshot": false, 00:13:22.993 "clone": false, 00:13:22.993 "esnap_clone": false 00:13:22.993 } 00:13:22.993 } 00:13:22.993 } 00:13:22.993 ] 00:13:22.993 14:17:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:13:22.993 14:17:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:22.993 14:17:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6640b4ce-4c9e-4b01-b2eb-add5bb86b3dd 00:13:23.281 14:17:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:23.281 14:17:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6640b4ce-4c9e-4b01-b2eb-add5bb86b3dd 00:13:23.281 14:17:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:13:23.538 14:17:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:23.538 14:17:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete e5784731-967b-425c-89ca-646db107035f 00:13:23.796 14:17:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6640b4ce-4c9e-4b01-b2eb-add5bb86b3dd 00:13:24.053 14:17:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:24.311 14:17:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:24.877 ************************************ 00:13:24.877 END TEST lvs_grow_dirty 00:13:24.877 ************************************ 00:13:24.877 00:13:24.877 real 0m21.045s 00:13:24.877 user 0m42.808s 00:13:24.877 sys 0m8.189s 00:13:24.877 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:24.877 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:24.877 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:13:24.877 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:13:24.877 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:13:24.877 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:13:24.877 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:24.877 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:13:24.877 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:13:24.877 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:13:24.877 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:24.877 nvmf_trace.0 00:13:24.877 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:13:24.877 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:13:24.877 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:24.877 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:13:25.135 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:25.135 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:13:25.135 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:25.135 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:25.135 rmmod nvme_tcp 00:13:25.135 rmmod nvme_fabrics 00:13:25.135 rmmod nvme_keyring 00:13:25.392 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:25.392 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:13:25.392 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:13:25.392 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 66907 ']' 00:13:25.392 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 66907 00:13:25.392 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 66907 ']' 00:13:25.392 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 66907 00:13:25.392 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:13:25.392 14:17:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:25.392 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 66907 00:13:25.392 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:25.392 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:25.392 killing process with pid 66907 00:13:25.392 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 66907' 00:13:25.392 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 66907 00:13:25.392 14:17:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 66907 00:13:26.765 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:26.765 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:26.765 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:26.765 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:13:26.765 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:13:26.765 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:26.765 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:13:26.765 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:26.765 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:26.765 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:26.765 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:26.765 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:26.765 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:26.765 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:26.765 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:26.765 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:26.765 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:26.765 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:26.765 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:26.765 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:26.765 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:26.765 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:26.765 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:13:26.765 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.765 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:26.765 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.765 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:13:26.765 00:13:26.765 real 0m43.709s 00:13:26.765 user 1m6.771s 00:13:26.765 sys 0m12.598s 00:13:26.765 ************************************ 00:13:26.765 END TEST nvmf_lvs_grow 00:13:26.765 ************************************ 00:13:26.765 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:26.765 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:27.023 14:17:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:13:27.023 14:17:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:27.023 14:17:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:27.023 14:17:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:27.023 ************************************ 00:13:27.023 START TEST nvmf_bdev_io_wait 00:13:27.023 ************************************ 00:13:27.023 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:13:27.023 * Looking for test storage... 
00:13:27.023 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:27.023 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:27.023 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:13:27.023 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:27.281 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:27.281 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:27.281 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:27.281 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:27.281 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:13:27.281 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:13:27.281 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:13:27.281 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:13:27.281 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:13:27.281 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:13:27.281 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:13:27.281 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:27.281 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:13:27.281 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:13:27.281 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:27.281 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:27.281 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:13:27.281 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:13:27.281 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:27.281 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:13:27.281 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:13:27.281 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:13:27.281 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:27.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.282 --rc genhtml_branch_coverage=1 00:13:27.282 --rc genhtml_function_coverage=1 00:13:27.282 --rc genhtml_legend=1 00:13:27.282 --rc geninfo_all_blocks=1 00:13:27.282 --rc geninfo_unexecuted_blocks=1 00:13:27.282 00:13:27.282 ' 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:27.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.282 --rc genhtml_branch_coverage=1 00:13:27.282 --rc genhtml_function_coverage=1 00:13:27.282 --rc genhtml_legend=1 00:13:27.282 --rc geninfo_all_blocks=1 00:13:27.282 --rc geninfo_unexecuted_blocks=1 00:13:27.282 00:13:27.282 ' 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:27.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.282 --rc genhtml_branch_coverage=1 00:13:27.282 --rc genhtml_function_coverage=1 00:13:27.282 --rc genhtml_legend=1 00:13:27.282 --rc geninfo_all_blocks=1 00:13:27.282 --rc geninfo_unexecuted_blocks=1 00:13:27.282 00:13:27.282 ' 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:27.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.282 --rc genhtml_branch_coverage=1 00:13:27.282 --rc genhtml_function_coverage=1 00:13:27.282 --rc genhtml_legend=1 00:13:27.282 --rc geninfo_all_blocks=1 00:13:27.282 --rc geninfo_unexecuted_blocks=1 00:13:27.282 00:13:27.282 ' 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:27.282 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:27.282 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:27.283 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:27.283 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:27.283 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:27.283 
14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:27.283 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:27.283 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:27.283 Cannot find device "nvmf_init_br" 00:13:27.283 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:13:27.283 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:27.283 Cannot find device "nvmf_init_br2" 00:13:27.283 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:13:27.283 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:27.283 Cannot find device "nvmf_tgt_br" 00:13:27.283 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:13:27.283 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:27.283 Cannot find device "nvmf_tgt_br2" 00:13:27.283 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:13:27.283 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:27.283 Cannot find device "nvmf_init_br" 00:13:27.283 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:13:27.283 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:27.283 Cannot find device "nvmf_init_br2" 00:13:27.283 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:13:27.283 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:27.283 Cannot find device "nvmf_tgt_br" 00:13:27.283 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:13:27.283 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:27.540 Cannot find device "nvmf_tgt_br2" 00:13:27.540 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:13:27.540 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:27.540 Cannot find device "nvmf_br" 00:13:27.540 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:13:27.541 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:27.541 Cannot find device "nvmf_init_if" 00:13:27.541 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:13:27.541 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:27.541 Cannot find device "nvmf_init_if2" 00:13:27.541 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:13:27.541 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:27.541 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:27.541 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:13:27.541 
14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:27.541 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:27.541 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:13:27.541 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:27.541 14:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:27.541 14:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:27.541 14:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:27.541 14:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:27.541 14:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:27.541 14:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:27.541 14:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:27.541 14:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:27.541 14:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:27.541 14:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:27.541 14:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:27.541 14:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:27.541 14:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:27.541 14:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:27.541 14:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:27.541 14:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:27.541 14:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:27.541 14:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:27.541 14:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:27.541 14:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:27.541 14:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:27.541 14:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:27.541 14:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:27.541 14:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:27.797 14:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:27.797 14:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:27.797 14:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:27.797 14:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:27.797 14:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:27.797 14:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:27.797 14:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:27.797 14:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:27.798 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:27.798 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.114 ms 00:13:27.798 00:13:27.798 --- 10.0.0.3 ping statistics --- 00:13:27.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.798 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:13:27.798 14:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:27.798 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:27.798 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.086 ms 00:13:27.798 00:13:27.798 --- 10.0.0.4 ping statistics --- 00:13:27.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.798 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:13:27.798 14:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:27.798 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:27.798 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:13:27.798 00:13:27.798 --- 10.0.0.1 ping statistics --- 00:13:27.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.798 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:13:27.798 14:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:27.798 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:27.798 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:13:27.798 00:13:27.798 --- 10.0.0.2 ping statistics --- 00:13:27.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.798 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:13:27.798 14:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:27.798 14:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:13:27.798 14:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:27.798 14:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:27.798 14:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:27.798 14:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:27.798 14:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:27.798 14:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:27.798 14:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:27.798 14:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:13:27.798 14:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:27.798 14:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:27.798 14:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:27.798 14:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=67291 00:13:27.798 14:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:13:27.798 14:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 67291 00:13:27.798 14:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 67291 ']' 00:13:27.798 14:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.798 14:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:27.798 14:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:27.798 14:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:27.798 14:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:28.056 [2024-11-06 14:17:55.447310] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
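The trace above shows how nvmfappstart brings up the target for this test: nvmf/common.sh records the PID (nvmfpid=67291), launches nvmf_tgt inside the nvmf_tgt_ns_spdk namespace with --wait-for-rpc, and then blocks in waitforlisten until the application answers on /var/tmp/spdk.sock. A minimal standalone sketch of that pattern follows; the command line and socket path are copied from the trace, while the polling loop is a simplified stand-in for waitforlisten rather than its actual implementation.

# Sketch: start the NVMe-oF target in the test namespace and wait for its RPC socket.
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
nvmfpid=$!
# Poll the default RPC socket until the app responds; rpc_get_methods is one of the RPCs
# still available while the app is paused by --wait-for-rpc.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
done
echo "nvmf_tgt (pid $nvmfpid) is up and listening on /var/tmp/spdk.sock"

Once the socket answers, the test resumes and issues the bdev_set_options, framework_start_init and nvmf_create_transport RPCs traced below.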
00:13:28.056 [2024-11-06 14:17:55.447466] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:28.056 [2024-11-06 14:17:55.638372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:28.313 [2024-11-06 14:17:55.794974] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:28.313 [2024-11-06 14:17:55.795220] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:28.313 [2024-11-06 14:17:55.795418] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:28.313 [2024-11-06 14:17:55.795471] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:28.313 [2024-11-06 14:17:55.795557] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:28.313 [2024-11-06 14:17:55.798140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:28.313 [2024-11-06 14:17:55.798358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:28.313 [2024-11-06 14:17:55.798443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.313 [2024-11-06 14:17:55.798476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:28.879 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:28.879 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:13:28.879 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:28.879 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:28.879 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:28.879 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:28.879 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:13:28.879 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.879 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:28.879 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.879 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:13:28.879 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.879 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:29.138 [2024-11-06 14:17:56.582211] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:29.138 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.138 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:29.138 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.138 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:29.138 [2024-11-06 14:17:56.607485] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:29.138 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.138 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:29.138 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.138 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:29.138 Malloc0 00:13:29.138 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.138 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:29.138 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.138 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:29.138 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.138 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:29.138 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.138 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:29.138 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.138 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:29.138 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.138 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:29.138 [2024-11-06 14:17:56.739479] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:29.138 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.138 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=67326 00:13:29.138 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:13:29.138 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:13:29.138 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:13:29.138 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=67328 00:13:29.138 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:13:29.138 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:29.138 14:17:56 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:29.138 { 00:13:29.138 "params": { 00:13:29.138 "name": "Nvme$subsystem", 00:13:29.138 "trtype": "$TEST_TRANSPORT", 00:13:29.138 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:29.138 "adrfam": "ipv4", 00:13:29.138 "trsvcid": "$NVMF_PORT", 00:13:29.138 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:29.138 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:29.138 "hdgst": ${hdgst:-false}, 00:13:29.138 "ddgst": ${ddgst:-false} 00:13:29.138 }, 00:13:29.138 "method": "bdev_nvme_attach_controller" 00:13:29.138 } 00:13:29.138 EOF 00:13:29.138 )") 00:13:29.138 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:13:29.138 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:13:29.138 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=67330 00:13:29.138 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:13:29.138 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:13:29.138 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:29.138 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:13:29.138 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:29.138 { 00:13:29.138 "params": { 00:13:29.138 "name": "Nvme$subsystem", 00:13:29.138 "trtype": "$TEST_TRANSPORT", 00:13:29.138 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:29.138 "adrfam": "ipv4", 00:13:29.138 "trsvcid": "$NVMF_PORT", 00:13:29.138 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:29.138 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:29.138 "hdgst": ${hdgst:-false}, 00:13:29.138 "ddgst": ${ddgst:-false} 00:13:29.138 }, 00:13:29.138 "method": "bdev_nvme_attach_controller" 00:13:29.138 } 00:13:29.138 EOF 00:13:29.138 )") 00:13:29.138 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=67333 00:13:29.138 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:13:29.138 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:13:29.138 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:13:29.138 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:13:29.138 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:29.138 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:29.138 { 00:13:29.138 "params": { 00:13:29.138 "name": "Nvme$subsystem", 00:13:29.138 "trtype": "$TEST_TRANSPORT", 00:13:29.139 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:29.139 "adrfam": "ipv4", 00:13:29.139 "trsvcid": "$NVMF_PORT", 00:13:29.139 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:29.139 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:29.139 "hdgst": ${hdgst:-false}, 00:13:29.139 "ddgst": ${ddgst:-false} 00:13:29.139 }, 00:13:29.139 "method": "bdev_nvme_attach_controller" 00:13:29.139 } 00:13:29.139 EOF 
00:13:29.139 )") 00:13:29.139 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:13:29.139 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:13:29.139 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:13:29.139 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:13:29.139 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:13:29.139 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:13:29.139 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:13:29.139 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:29.139 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:29.139 { 00:13:29.139 "params": { 00:13:29.139 "name": "Nvme$subsystem", 00:13:29.139 "trtype": "$TEST_TRANSPORT", 00:13:29.139 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:29.139 "adrfam": "ipv4", 00:13:29.139 "trsvcid": "$NVMF_PORT", 00:13:29.139 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:29.139 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:29.139 "hdgst": ${hdgst:-false}, 00:13:29.139 "ddgst": ${ddgst:-false} 00:13:29.139 }, 00:13:29.139 "method": "bdev_nvme_attach_controller" 00:13:29.139 } 00:13:29.139 EOF 00:13:29.139 )") 00:13:29.139 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:13:29.139 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:13:29.139 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:13:29.139 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:13:29.139 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:29.139 "params": { 00:13:29.139 "name": "Nvme1", 00:13:29.139 "trtype": "tcp", 00:13:29.139 "traddr": "10.0.0.3", 00:13:29.139 "adrfam": "ipv4", 00:13:29.139 "trsvcid": "4420", 00:13:29.139 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:29.139 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:29.139 "hdgst": false, 00:13:29.139 "ddgst": false 00:13:29.139 }, 00:13:29.139 "method": "bdev_nvme_attach_controller" 00:13:29.139 }' 00:13:29.139 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:13:29.139 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:13:29.139 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:13:29.397 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:29.397 "params": { 00:13:29.397 "name": "Nvme1", 00:13:29.397 "trtype": "tcp", 00:13:29.397 "traddr": "10.0.0.3", 00:13:29.397 "adrfam": "ipv4", 00:13:29.397 "trsvcid": "4420", 00:13:29.397 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:29.397 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:29.397 "hdgst": false, 00:13:29.397 "ddgst": false 00:13:29.397 }, 00:13:29.397 "method": "bdev_nvme_attach_controller" 00:13:29.398 }' 00:13:29.398 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:13:29.398 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:29.398 "params": { 00:13:29.398 "name": "Nvme1", 00:13:29.398 "trtype": "tcp", 00:13:29.398 "traddr": "10.0.0.3", 00:13:29.398 "adrfam": "ipv4", 00:13:29.398 "trsvcid": "4420", 00:13:29.398 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:29.398 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:29.398 "hdgst": false, 00:13:29.398 "ddgst": false 00:13:29.398 }, 00:13:29.398 "method": "bdev_nvme_attach_controller" 00:13:29.398 }' 00:13:29.398 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:13:29.398 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:29.398 "params": { 00:13:29.398 "name": "Nvme1", 00:13:29.398 "trtype": "tcp", 00:13:29.398 "traddr": "10.0.0.3", 00:13:29.398 "adrfam": "ipv4", 00:13:29.398 "trsvcid": "4420", 00:13:29.398 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:29.398 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:29.398 "hdgst": false, 00:13:29.398 "ddgst": false 00:13:29.398 }, 00:13:29.398 "method": "bdev_nvme_attach_controller" 00:13:29.398 }' 00:13:29.398 14:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 67326 00:13:29.398 [2024-11-06 14:17:56.862657] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:13:29.398 [2024-11-06 14:17:56.863021] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:13:29.398 [2024-11-06 14:17:56.865455] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:13:29.398 [2024-11-06 14:17:56.865573] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:13:29.398 [2024-11-06 14:17:56.870143] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:13:29.398 [2024-11-06 14:17:56.870262] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:13:29.398 [2024-11-06 14:17:56.879266] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:13:29.398 [2024-11-06 14:17:56.879521] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:13:29.655 [2024-11-06 14:17:57.110976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:29.655 [2024-11-06 14:17:57.172707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:29.655 [2024-11-06 14:17:57.236135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:13:29.914 [2024-11-06 14:17:57.295783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:29.914 [2024-11-06 14:17:57.310382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:13:29.914 [2024-11-06 14:17:57.364534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:29.914 [2024-11-06 14:17:57.423289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:13:29.914 [2024-11-06 14:17:57.442462] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:29.914 [2024-11-06 14:17:57.486562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:13:29.914 [2024-11-06 14:17:57.514531] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:30.171 [2024-11-06 14:17:57.612928] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:30.171 Running I/O for 1 seconds... 00:13:30.171 [2024-11-06 14:17:57.713137] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:30.171 Running I/O for 1 seconds... 00:13:30.429 Running I/O for 1 seconds... 00:13:30.429 Running I/O for 1 seconds... 
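At this point the target side is fully provisioned (TCP transport, Malloc0 namespace under nqn.2016-06.io.spdk:cnode1, listener on 10.0.0.3:4420) and the four bdevperf instances above are each running their one-second workload (write, read, flush and unmap on core masks 0x10, 0x20, 0x40 and 0x80). Each instance reads its bdev configuration from /dev/fd/63, fed by gen_nvmf_target_json. Below is a reduced sketch of the write invocation with the configuration written to a file instead; the bdev_nvme_attach_controller parameters are exactly the ones printf'd in the trace, but the surrounding "subsystems"/"bdev"/"config" envelope is an assumption about what gen_nvmf_target_json wraps around that fragment.

# Sketch: one bdevperf worker attached to the target over NVMe/TCP (values copied from the trace above).
cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.3", "adrfam": "ipv4",
            "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1", "hdgst": false, "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /tmp/nvme1.json -q 128 -o 4096 -w write -t 1 -s 256

Each run ends with the per-job IOPS/latency summary that appears next in the log.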
00:13:31.362 4457.00 IOPS, 17.41 MiB/s 00:13:31.362 Latency(us) 00:13:31.362 [2024-11-06T14:17:58.997Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:31.362 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:13:31.362 Nvme1n1 : 1.04 4436.99 17.33 0.00 0.00 28369.11 8211.74 69483.95 00:13:31.362 [2024-11-06T14:17:58.997Z] =================================================================================================================== 00:13:31.362 [2024-11-06T14:17:58.997Z] Total : 4436.99 17.33 0.00 0.00 28369.11 8211.74 69483.95 00:13:31.362 178216.00 IOPS, 696.16 MiB/s 00:13:31.362 Latency(us) 00:13:31.362 [2024-11-06T14:17:58.997Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:31.362 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:13:31.362 Nvme1n1 : 1.00 177822.65 694.62 0.00 0.00 716.19 427.69 3658.44 00:13:31.362 [2024-11-06T14:17:58.997Z] =================================================================================================================== 00:13:31.362 [2024-11-06T14:17:58.997Z] Total : 177822.65 694.62 0.00 0.00 716.19 427.69 3658.44 00:13:31.362 5438.00 IOPS, 21.24 MiB/s 00:13:31.362 Latency(us) 00:13:31.362 [2024-11-06T14:17:58.997Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:31.362 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:13:31.362 Nvme1n1 : 1.02 5482.84 21.42 0.00 0.00 23161.70 14423.18 38110.89 00:13:31.362 [2024-11-06T14:17:58.997Z] =================================================================================================================== 00:13:31.362 [2024-11-06T14:17:58.997Z] Total : 5482.84 21.42 0.00 0.00 23161.70 14423.18 38110.89 00:13:31.362 3788.00 IOPS, 14.80 MiB/s 00:13:31.362 Latency(us) 00:13:31.362 [2024-11-06T14:17:58.997Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:31.362 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:13:31.362 Nvme1n1 : 1.01 3903.22 15.25 0.00 0.00 32657.33 7422.15 76221.79 00:13:31.362 [2024-11-06T14:17:58.997Z] =================================================================================================================== 00:13:31.362 [2024-11-06T14:17:58.997Z] Total : 3903.22 15.25 0.00 0.00 32657.33 7422.15 76221.79 00:13:32.296 14:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 67328 00:13:32.296 14:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 67330 00:13:32.296 14:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 67333 00:13:32.296 14:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:32.296 14:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.296 14:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:32.296 14:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.296 14:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:13:32.296 14:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:13:32.296 14:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:13:32.296 14:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:13:32.296 14:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:32.296 14:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:13:32.296 14:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:32.296 14:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:32.296 rmmod nvme_tcp 00:13:32.296 rmmod nvme_fabrics 00:13:32.296 rmmod nvme_keyring 00:13:32.296 14:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:32.296 14:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:13:32.296 14:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:13:32.296 14:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 67291 ']' 00:13:32.296 14:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 67291 00:13:32.296 14:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 67291 ']' 00:13:32.296 14:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 67291 00:13:32.296 14:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:13:32.296 14:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:32.296 14:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67291 00:13:32.296 14:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:32.296 14:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:32.296 killing process with pid 67291 00:13:32.296 14:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67291' 00:13:32.296 14:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 67291 00:13:32.296 14:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 67291 00:13:33.668 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:33.668 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:33.668 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:33.668 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:13:33.668 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:13:33.668 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:33.668 14:18:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:13:33.668 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:33.668 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:33.668 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:33.668 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:33.668 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:33.668 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:33.668 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:33.668 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:33.668 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:33.668 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:33.668 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:33.668 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:33.668 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:33.668 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:33.668 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:33.668 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:33.668 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:33.668 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:33.668 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:33.956 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:13:33.956 00:13:33.956 real 0m6.859s 00:13:33.956 user 0m27.653s 00:13:33.956 sys 0m3.471s 00:13:33.956 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:33.956 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:33.956 ************************************ 00:13:33.956 END TEST nvmf_bdev_io_wait 00:13:33.956 ************************************ 00:13:33.956 14:18:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:13:33.956 14:18:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:33.956 14:18:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:33.956 14:18:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:33.956 ************************************ 00:13:33.956 START TEST nvmf_queue_depth 00:13:33.956 ************************************ 00:13:33.956 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:13:33.956 * Looking for test storage... 
00:13:33.956 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:33.956 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:33.956 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:13:33.956 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:34.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:34.216 --rc genhtml_branch_coverage=1 00:13:34.216 --rc genhtml_function_coverage=1 00:13:34.216 --rc genhtml_legend=1 00:13:34.216 --rc geninfo_all_blocks=1 00:13:34.216 --rc geninfo_unexecuted_blocks=1 00:13:34.216 00:13:34.216 ' 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:34.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:34.216 --rc genhtml_branch_coverage=1 00:13:34.216 --rc genhtml_function_coverage=1 00:13:34.216 --rc genhtml_legend=1 00:13:34.216 --rc geninfo_all_blocks=1 00:13:34.216 --rc geninfo_unexecuted_blocks=1 00:13:34.216 00:13:34.216 ' 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:34.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:34.216 --rc genhtml_branch_coverage=1 00:13:34.216 --rc genhtml_function_coverage=1 00:13:34.216 --rc genhtml_legend=1 00:13:34.216 --rc geninfo_all_blocks=1 00:13:34.216 --rc geninfo_unexecuted_blocks=1 00:13:34.216 00:13:34.216 ' 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:34.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:34.216 --rc genhtml_branch_coverage=1 00:13:34.216 --rc genhtml_function_coverage=1 00:13:34.216 --rc genhtml_legend=1 00:13:34.216 --rc geninfo_all_blocks=1 00:13:34.216 --rc geninfo_unexecuted_blocks=1 00:13:34.216 00:13:34.216 ' 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:13:34.216 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:34.217 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:34.217 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:34.217 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:34.217 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:34.217 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:34.217 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:34.217 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:34.217 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:34.217 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:34.217 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:13:34.217 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:13:34.217 
14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:34.217 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:13:34.217 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:34.217 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:34.217 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:34.217 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:34.217 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:34.217 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:34.217 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:34.217 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:34.217 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:34.217 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:34.217 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:34.217 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:34.217 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:34.217 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:34.217 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:34.217 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:34.217 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:34.217 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:34.217 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:34.217 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:34.217 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:34.217 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:34.217 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:34.217 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:34.217 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:34.217 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:34.217 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:34.217 14:18:01 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:34.217 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:34.217 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:34.217 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:34.217 Cannot find device "nvmf_init_br" 00:13:34.217 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:13:34.217 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:34.217 Cannot find device "nvmf_init_br2" 00:13:34.217 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:13:34.217 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:34.217 Cannot find device "nvmf_tgt_br" 00:13:34.217 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:13:34.217 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:34.217 Cannot find device "nvmf_tgt_br2" 00:13:34.217 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:13:34.217 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:34.217 Cannot find device "nvmf_init_br" 00:13:34.217 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:13:34.217 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:34.217 Cannot find device "nvmf_init_br2" 00:13:34.217 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:13:34.217 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:34.217 Cannot find device "nvmf_tgt_br" 00:13:34.217 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:13:34.217 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:34.217 Cannot find device "nvmf_tgt_br2" 00:13:34.217 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:13:34.217 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:34.476 Cannot find device "nvmf_br" 00:13:34.476 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:13:34.476 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:34.476 Cannot find device "nvmf_init_if" 00:13:34.476 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:13:34.476 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:34.476 Cannot find device "nvmf_init_if2" 00:13:34.476 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:13:34.476 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:34.476 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:34.476 14:18:01 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:13:34.476 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:34.476 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:34.476 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:13:34.476 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:34.476 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:34.476 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:34.476 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:34.476 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:34.476 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:34.476 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:34.476 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:34.476 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:34.476 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:34.476 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:34.476 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:34.476 14:18:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:34.476 14:18:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:34.476 14:18:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:34.477 14:18:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:34.477 14:18:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:34.477 14:18:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:34.477 14:18:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:34.477 14:18:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:34.477 14:18:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:34.477 14:18:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:34.477 14:18:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:34.477 
14:18:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:34.477 14:18:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:34.477 14:18:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:34.735 14:18:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:34.735 14:18:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:34.735 14:18:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:34.735 14:18:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:34.735 14:18:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:34.735 14:18:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:34.735 14:18:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:34.735 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:34.735 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.112 ms 00:13:34.735 00:13:34.735 --- 10.0.0.3 ping statistics --- 00:13:34.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:34.735 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:13:34.735 14:18:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:34.735 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:34.735 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.115 ms 00:13:34.735 00:13:34.735 --- 10.0.0.4 ping statistics --- 00:13:34.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:34.735 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:13:34.735 14:18:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:34.735 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:34.735 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.062 ms 00:13:34.735 00:13:34.735 --- 10.0.0.1 ping statistics --- 00:13:34.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:34.735 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:13:34.735 14:18:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:34.735 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:34.735 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:13:34.735 00:13:34.735 --- 10.0.0.2 ping statistics --- 00:13:34.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:34.735 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:13:34.735 14:18:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:34.735 14:18:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:13:34.735 14:18:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:34.735 14:18:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:34.735 14:18:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:34.735 14:18:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:34.736 14:18:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:34.736 14:18:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:34.736 14:18:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:34.736 14:18:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:13:34.736 14:18:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:34.736 14:18:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:34.736 14:18:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:34.736 14:18:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=67659 00:13:34.736 14:18:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:34.736 14:18:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 67659 00:13:34.736 14:18:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 67659 ']' 00:13:34.736 14:18:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:34.736 14:18:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:34.736 14:18:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:34.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:34.736 14:18:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:34.736 14:18:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:34.736 [2024-11-06 14:18:02.348714] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
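The nvmf_veth_init trace above boils down to a small veth-plus-bridge topology: one veth pair per side, the target half moved into the nvmf_tgt_ns_spdk namespace, everything joined through the nvmf_br bridge, and an iptables ACCEPT rule for the NVMe/TCP port. A condensed sketch reconstructed from that trace (one interface pair per side shown; the script creates the *_if2 twins the same way; illustrative only, not a substitute for nvmf_veth_init):

  # Reconstructed from the nvmf_veth_init trace above; run as root.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator-side pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                   # target end lives in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                          # bridge the two root-namespace ends
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP
  ping -c 1 10.0.0.3                                               # initiator -> in-namespace target

The pings to 10.0.0.3/10.0.0.4 and back from inside the namespace to 10.0.0.1/10.0.0.2, as logged above, confirm the topology before the target application is started.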
00:13:34.736 [2024-11-06 14:18:02.348873] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:34.995 [2024-11-06 14:18:02.542234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:35.254 [2024-11-06 14:18:02.689694] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:35.254 [2024-11-06 14:18:02.689971] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:35.254 [2024-11-06 14:18:02.690000] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:35.254 [2024-11-06 14:18:02.690023] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:35.254 [2024-11-06 14:18:02.690036] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:35.254 [2024-11-06 14:18:02.691380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:35.512 [2024-11-06 14:18:02.939937] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:35.771 14:18:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:35.771 14:18:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:13:35.771 14:18:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:35.771 14:18:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:35.771 14:18:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:35.771 14:18:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:35.771 14:18:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:35.771 14:18:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.771 14:18:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:35.771 [2024-11-06 14:18:03.228540] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:35.771 14:18:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.771 14:18:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:35.771 14:18:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.771 14:18:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:35.771 Malloc0 00:13:35.771 14:18:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.771 14:18:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:35.771 14:18:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.771 14:18:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:13:35.771 14:18:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.771 14:18:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:35.771 14:18:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.771 14:18:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:35.771 14:18:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.771 14:18:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:35.771 14:18:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.771 14:18:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:35.771 [2024-11-06 14:18:03.367147] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:35.771 14:18:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.771 14:18:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=67692 00:13:35.771 14:18:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:13:35.771 14:18:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:35.771 14:18:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 67692 /var/tmp/bdevperf.sock 00:13:35.771 14:18:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 67692 ']' 00:13:35.771 14:18:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:35.771 14:18:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:35.771 14:18:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:35.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:35.771 14:18:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:35.771 14:18:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:36.030 [2024-11-06 14:18:03.481976] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
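With networking in place, the queue_depth test configures the target over JSON-RPC: a TCP transport, a 64 MiB malloc bdev, a subsystem with that bdev as a namespace, and a listener on the in-namespace address. The rpc_cmd calls traced above are roughly equivalent to driving scripts/rpc.py directly against the target's /var/tmp/spdk.sock, as the multipath test later in this log does (sketch only):

  # Target-side configuration for the queue_depth run, reconstructed from the trace.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" nvmf_create_transport -t tcp -o -u 8192
  "$rpc" bdev_malloc_create 64 512 -b Malloc0                          # 64 MiB bdev, 512-byte blocks
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420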
00:13:36.030 [2024-11-06 14:18:03.482372] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67692 ] 00:13:36.289 [2024-11-06 14:18:03.673018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.289 [2024-11-06 14:18:03.818622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.572 [2024-11-06 14:18:04.047488] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:36.837 14:18:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:36.837 14:18:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:13:36.838 14:18:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:13:36.838 14:18:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.838 14:18:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:36.838 NVMe0n1 00:13:36.838 14:18:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.838 14:18:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:37.096 Running I/O for 10 seconds... 00:13:38.968 6965.00 IOPS, 27.21 MiB/s [2024-11-06T14:18:07.979Z] 7358.00 IOPS, 28.74 MiB/s [2024-11-06T14:18:08.916Z] 7514.00 IOPS, 29.35 MiB/s [2024-11-06T14:18:09.853Z] 7665.50 IOPS, 29.94 MiB/s [2024-11-06T14:18:10.789Z] 7787.40 IOPS, 30.42 MiB/s [2024-11-06T14:18:11.725Z] 7866.83 IOPS, 30.73 MiB/s [2024-11-06T14:18:12.661Z] 8004.86 IOPS, 31.27 MiB/s [2024-11-06T14:18:13.650Z] 8078.50 IOPS, 31.56 MiB/s [2024-11-06T14:18:14.585Z] 8109.00 IOPS, 31.68 MiB/s [2024-11-06T14:18:14.843Z] 8188.10 IOPS, 31.98 MiB/s 00:13:47.208 Latency(us) 00:13:47.208 [2024-11-06T14:18:14.843Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:47.208 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:13:47.208 Verification LBA range: start 0x0 length 0x4000 00:13:47.208 NVMe0n1 : 10.10 8211.32 32.08 0.00 0.00 124124.10 21266.30 88855.24 00:13:47.208 [2024-11-06T14:18:14.843Z] =================================================================================================================== 00:13:47.208 [2024-11-06T14:18:14.843Z] Total : 8211.32 32.08 0.00 0.00 124124.10 21266.30 88855.24 00:13:47.208 { 00:13:47.208 "results": [ 00:13:47.208 { 00:13:47.208 "job": "NVMe0n1", 00:13:47.208 "core_mask": "0x1", 00:13:47.208 "workload": "verify", 00:13:47.208 "status": "finished", 00:13:47.208 "verify_range": { 00:13:47.208 "start": 0, 00:13:47.208 "length": 16384 00:13:47.208 }, 00:13:47.208 "queue_depth": 1024, 00:13:47.208 "io_size": 4096, 00:13:47.208 "runtime": 10.095455, 00:13:47.208 "iops": 8211.318855861375, 00:13:47.208 "mibps": 32.075464280708495, 00:13:47.208 "io_failed": 0, 00:13:47.208 "io_timeout": 0, 00:13:47.208 "avg_latency_us": 124124.10329202742, 00:13:47.208 "min_latency_us": 21266.300401606426, 00:13:47.208 "max_latency_us": 88855.23534136546 
00:13:47.208 } 00:13:47.208 ], 00:13:47.208 "core_count": 1 00:13:47.208 } 00:13:47.208 14:18:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 67692 00:13:47.208 14:18:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 67692 ']' 00:13:47.208 14:18:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 67692 00:13:47.208 14:18:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:13:47.208 14:18:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:47.208 14:18:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67692 00:13:47.208 14:18:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:47.208 14:18:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:47.208 killing process with pid 67692 00:13:47.208 Received shutdown signal, test time was about 10.000000 seconds 00:13:47.208 00:13:47.208 Latency(us) 00:13:47.208 [2024-11-06T14:18:14.843Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:47.208 [2024-11-06T14:18:14.843Z] =================================================================================================================== 00:13:47.208 [2024-11-06T14:18:14.843Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:47.208 14:18:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67692' 00:13:47.208 14:18:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 67692 00:13:47.208 14:18:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 67692 00:13:48.143 14:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:48.143 14:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:13:48.143 14:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:48.143 14:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:13:48.143 14:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:48.143 14:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:13:48.143 14:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:48.143 14:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:48.143 rmmod nvme_tcp 00:13:48.143 rmmod nvme_fabrics 00:13:48.409 rmmod nvme_keyring 00:13:48.409 14:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:48.409 14:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:13:48.409 14:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:13:48.409 14:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 67659 ']' 00:13:48.409 14:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 67659 00:13:48.409 14:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 67659 ']' 
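The initiator half of the run mirrors the trace above: bdevperf is started with -z so it waits for RPC configuration, a bdev_nvme controller is attached to the listener inside the namespace, and bdevperf.py then kicks off the timed workload (queue depth 1024, 4 KiB I/O, verify, 10 s). Reconstructed as a sketch:

  # Initiator side of the queue_depth run, reconstructed from the trace above.
  spdk=/home/vagrant/spdk_repo/spdk
  "$spdk"/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  # ... wait until /var/tmp/bdevperf.sock is answering, then:
  "$spdk"/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  "$spdk"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

In this run the workload settles at roughly 8.2k IOPS (about 32 MiB/s) at queue depth 1024 against the malloc-backed namespace, as reported in the results block above.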
00:13:48.409 14:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 67659 00:13:48.409 14:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:13:48.409 14:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:48.409 14:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67659 00:13:48.409 14:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:13:48.410 14:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:13:48.410 14:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67659' 00:13:48.410 killing process with pid 67659 00:13:48.410 14:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 67659 00:13:48.410 14:18:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 67659 00:13:49.807 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:49.807 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:49.807 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:49.807 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:13:49.807 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:13:49.807 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:49.807 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:13:49.807 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:49.807 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:49.807 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:49.807 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:49.807 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:49.807 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:49.807 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:49.807 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:49.807 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:49.807 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:49.807 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:50.066 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:50.066 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:50.066 14:18:17 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:50.066 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:50.066 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:50.066 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:50.066 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:50.066 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:50.066 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:13:50.066 00:13:50.066 real 0m16.201s 00:13:50.066 user 0m25.419s 00:13:50.066 sys 0m3.428s 00:13:50.066 ************************************ 00:13:50.066 END TEST nvmf_queue_depth 00:13:50.066 ************************************ 00:13:50.066 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:50.066 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:50.066 14:18:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:13:50.066 14:18:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:50.066 14:18:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:50.066 14:18:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:50.066 ************************************ 00:13:50.066 START TEST nvmf_target_multipath 00:13:50.066 ************************************ 00:13:50.066 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:13:50.325 * Looking for test storage... 
00:13:50.325 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:50.325 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:50.325 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:13:50.325 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:50.325 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:50.325 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:50.325 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:50.326 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:50.326 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:13:50.326 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:13:50.326 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:13:50.326 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:13:50.326 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:13:50.326 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:13:50.326 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:13:50.326 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:50.326 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:13:50.326 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:13:50.326 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:50.326 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:50.326 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:13:50.326 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:13:50.326 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:50.326 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:13:50.326 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:13:50.326 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:13:50.326 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:13:50.326 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:50.326 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:13:50.326 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:13:50.326 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:50.326 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:50.326 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:13:50.326 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:50.326 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:50.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.326 --rc genhtml_branch_coverage=1 00:13:50.326 --rc genhtml_function_coverage=1 00:13:50.326 --rc genhtml_legend=1 00:13:50.326 --rc geninfo_all_blocks=1 00:13:50.326 --rc geninfo_unexecuted_blocks=1 00:13:50.326 00:13:50.326 ' 00:13:50.326 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:50.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.326 --rc genhtml_branch_coverage=1 00:13:50.326 --rc genhtml_function_coverage=1 00:13:50.326 --rc genhtml_legend=1 00:13:50.326 --rc geninfo_all_blocks=1 00:13:50.326 --rc geninfo_unexecuted_blocks=1 00:13:50.326 00:13:50.326 ' 00:13:50.326 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:50.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.326 --rc genhtml_branch_coverage=1 00:13:50.326 --rc genhtml_function_coverage=1 00:13:50.326 --rc genhtml_legend=1 00:13:50.326 --rc geninfo_all_blocks=1 00:13:50.326 --rc geninfo_unexecuted_blocks=1 00:13:50.326 00:13:50.326 ' 00:13:50.326 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:50.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.326 --rc genhtml_branch_coverage=1 00:13:50.326 --rc genhtml_function_coverage=1 00:13:50.326 --rc genhtml_legend=1 00:13:50.326 --rc geninfo_all_blocks=1 00:13:50.326 --rc geninfo_unexecuted_blocks=1 00:13:50.326 00:13:50.326 ' 00:13:50.326 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:50.326 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:13:50.326 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:50.326 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:50.326 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:50.326 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:50.326 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:50.326 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:50.326 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:50.326 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:50.326 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:50.326 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:50.585 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:13:50.585 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:13:50.585 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:50.585 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:50.585 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:50.585 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:50.585 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:50.585 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:13:50.585 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:50.585 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:50.585 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:50.585 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.585 
14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.585 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.585 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:13:50.585 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.585 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:13:50.585 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:50.585 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:50.586 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:50.586 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:50.586 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:50.586 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:50.586 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:50.586 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:50.586 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:50.586 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:50.586 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:13:50.586 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:50.586 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:13:50.586 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:50.586 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:13:50.586 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:50.586 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:50.586 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:50.586 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:50.586 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:50.586 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:50.586 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:50.586 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:50.586 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:50.586 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:50.586 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:50.586 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:50.586 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:50.586 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:50.586 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:50.586 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:50.586 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:50.586 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:50.586 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:50.586 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:50.586 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:50.586 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:50.586 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:50.586 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:50.586 14:18:17 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:50.586 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:50.586 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:50.586 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:50.586 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:50.586 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:50.586 14:18:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:50.586 Cannot find device "nvmf_init_br" 00:13:50.586 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:13:50.586 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:50.586 Cannot find device "nvmf_init_br2" 00:13:50.586 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:13:50.586 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:50.586 Cannot find device "nvmf_tgt_br" 00:13:50.586 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:13:50.586 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:50.586 Cannot find device "nvmf_tgt_br2" 00:13:50.586 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:13:50.586 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:50.586 Cannot find device "nvmf_init_br" 00:13:50.586 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:13:50.586 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:50.586 Cannot find device "nvmf_init_br2" 00:13:50.586 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:13:50.586 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:50.586 Cannot find device "nvmf_tgt_br" 00:13:50.586 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:13:50.586 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:50.586 Cannot find device "nvmf_tgt_br2" 00:13:50.586 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:13:50.586 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:50.586 Cannot find device "nvmf_br" 00:13:50.586 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:13:50.586 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:50.586 Cannot find device "nvmf_init_if" 00:13:50.586 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:13:50.586 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:50.586 Cannot find device "nvmf_init_if2" 00:13:50.586 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:13:50.586 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:50.586 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:50.586 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:13:50.586 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:50.846 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:50.846 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:13:50.846 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:50.846 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:50.846 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:50.846 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:50.846 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:50.846 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:50.846 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:50.846 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:50.846 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:50.846 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:50.846 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:50.846 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:50.846 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:50.846 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:50.846 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:50.846 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:50.846 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:50.846 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
00:13:50.846 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:50.846 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:50.846 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:50.846 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:50.846 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:50.846 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:50.846 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:50.846 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:50.846 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:50.846 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:50.846 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:50.846 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:50.846 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:50.846 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:51.106 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:51.106 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:51.106 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.118 ms 00:13:51.106 00:13:51.106 --- 10.0.0.3 ping statistics --- 00:13:51.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:51.106 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:13:51.106 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:51.106 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:51.106 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.109 ms 00:13:51.106 00:13:51.106 --- 10.0.0.4 ping statistics --- 00:13:51.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:51.106 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:13:51.106 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:51.106 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:51.106 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:13:51.106 00:13:51.106 --- 10.0.0.1 ping statistics --- 00:13:51.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:51.106 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:13:51.106 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:51.106 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:51.106 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:13:51.106 00:13:51.106 --- 10.0.0.2 ping statistics --- 00:13:51.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:51.106 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:13:51.106 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:51.106 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:13:51.106 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:51.106 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:51.106 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:51.106 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:51.106 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:51.106 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:51.106 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:51.106 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:13:51.106 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:13:51.106 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:13:51.106 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:51.106 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:51.106 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:51.106 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=68091 00:13:51.106 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:51.106 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 68091 00:13:51.106 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@833 -- # '[' -z 68091 ']' 00:13:51.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
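As with the earlier target, waitforlisten blocks until the freshly started nvmf_tgt (pid 68091) is answering on /var/tmp/spdk.sock before any RPCs are issued. A minimal polling loop in the same spirit (hypothetical sketch, not the actual waitforlisten implementation) looks like:

  # Hypothetical wait-for-RPC loop; retries until the target's RPC socket responds.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for _ in $(seq 1 100); do
      "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done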
00:13:51.106 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.106 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:51.106 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:51.106 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:51.106 14:18:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:51.106 [2024-11-06 14:18:18.691823] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:13:51.106 [2024-11-06 14:18:18.692029] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:51.382 [2024-11-06 14:18:18.886269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:51.673 [2024-11-06 14:18:19.022838] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:51.673 [2024-11-06 14:18:19.022928] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:51.673 [2024-11-06 14:18:19.022948] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:51.673 [2024-11-06 14:18:19.022962] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:51.673 [2024-11-06 14:18:19.022976] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
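The -m/-c core mask determines where reactors run: 0x2 put the queue_depth target's single reactor on core 1, bdevperf ran with 0x1 on core 0, and 0xF here brings up four reactors on cores 0 through 3 (the "Total cores available: 4" line). Decoding a mask is just reading its set bits, for example:

  # Decode an SPDK core mask into reactor cores (illustrative only).
  mask=0xF
  for i in $(seq 0 31); do
      (( (mask >> i) & 1 )) && echo "reactor on core $i"
  done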
00:13:51.673 [2024-11-06 14:18:19.025184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:51.673 [2024-11-06 14:18:19.025389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:51.673 [2024-11-06 14:18:19.025517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.673 [2024-11-06 14:18:19.025763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:51.673 [2024-11-06 14:18:19.262151] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:51.931 14:18:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:51.931 14:18:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@866 -- # return 0 00:13:51.931 14:18:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:51.931 14:18:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:51.931 14:18:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:52.191 14:18:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:52.191 14:18:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:52.191 [2024-11-06 14:18:19.819799] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:52.449 14:18:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:13:52.707 Malloc0 00:13:52.707 14:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:13:52.966 14:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:53.224 14:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:53.224 [2024-11-06 14:18:20.845949] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:53.483 14:18:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:13:53.483 [2024-11-06 14:18:21.082023] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:13:53.483 14:18:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid=406d54d0-5e94-472a-a2b3-4291f3ac81e0 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:13:53.740 14:18:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid=406d54d0-5e94-472a-a2b3-4291f3ac81e0 -t tcp -n nqn.2016-06.io.spdk:cnode1 
-a 10.0.0.4 -s 4420 -g -G 00:13:53.999 14:18:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:13:53.999 14:18:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # local i=0 00:13:53.999 14:18:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:53.999 14:18:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:53.999 14:18:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # sleep 2 00:13:55.902 14:18:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:55.902 14:18:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:55.902 14:18:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:55.902 14:18:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:55.902 14:18:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:55.902 14:18:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # return 0 00:13:55.902 14:18:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:13:55.902 14:18:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:13:55.902 14:18:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:13:55.902 14:18:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:13:55.902 14:18:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:13:55.902 14:18:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:13:55.902 14:18:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:13:55.902 14:18:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:13:55.902 14:18:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:13:55.902 14:18:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:13:55.902 14:18:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:13:55.902 14:18:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:13:55.902 14:18:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:13:55.902 14:18:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:13:55.902 14:18:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 
00:13:55.902 14:18:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:55.902 14:18:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:13:55.902 14:18:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:13:55.902 14:18:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:13:55.902 14:18:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:13:55.902 14:18:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:13:55.902 14:18:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:55.902 14:18:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:13:55.902 14:18:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:13:55.902 14:18:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:13:55.902 14:18:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:13:55.902 14:18:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=68181 00:13:55.902 14:18:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:13:55.902 14:18:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:13:55.902 [global] 00:13:55.902 thread=1 00:13:55.902 invalidate=1 00:13:55.902 rw=randrw 00:13:55.902 time_based=1 00:13:55.902 runtime=6 00:13:55.902 ioengine=libaio 00:13:55.902 direct=1 00:13:55.902 bs=4096 00:13:55.902 iodepth=128 00:13:55.902 norandommap=0 00:13:55.902 numjobs=1 00:13:55.902 00:13:55.902 verify_dump=1 00:13:55.902 verify_backlog=512 00:13:55.902 verify_state_save=0 00:13:55.902 do_verify=1 00:13:55.902 verify=crc32c-intel 00:13:55.902 [job0] 00:13:55.902 filename=/dev/nvme0n1 00:13:55.902 Could not set queue depth (nvme0n1) 00:13:56.161 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:56.161 fio-3.35 00:13:56.161 Starting 1 thread 00:13:57.121 14:18:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:13:57.121 14:18:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:13:57.380 14:18:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:13:57.380 14:18:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:13:57.380 14:18:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 
00:13:57.380 14:18:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:13:57.380 14:18:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:13:57.380 14:18:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:13:57.380 14:18:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:13:57.380 14:18:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:13:57.380 14:18:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:57.380 14:18:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:13:57.380 14:18:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:13:57.380 14:18:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:13:57.380 14:18:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:13:57.639 14:18:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:13:57.897 14:18:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:13:57.898 14:18:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:13:57.898 14:18:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:57.898 14:18:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:13:57.898 14:18:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:13:57.898 14:18:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:13:57.898 14:18:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:13:57.898 14:18:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:13:57.898 14:18:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:57.898 14:18:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:13:57.898 14:18:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:13:57.898 14:18:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:13:57.898 14:18:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 68181 00:14:03.199 00:14:03.199 job0: (groupid=0, jobs=1): err= 0: pid=68202: Wed Nov 6 14:18:29 2024 00:14:03.199 read: IOPS=9680, BW=37.8MiB/s (39.6MB/s)(227MiB/6006msec) 00:14:03.199 slat (usec): min=5, max=6926, avg=57.67, stdev=204.52 00:14:03.199 clat (usec): min=1404, max=17715, avg=9078.52, stdev=1629.65 00:14:03.199 lat (usec): min=1464, max=17747, avg=9136.19, stdev=1636.05 00:14:03.199 clat percentiles (usec): 00:14:03.199 | 1.00th=[ 5211], 5.00th=[ 6718], 10.00th=[ 7701], 20.00th=[ 8225], 00:14:03.199 | 30.00th=[ 8455], 40.00th=[ 8586], 50.00th=[ 8848], 60.00th=[ 9110], 00:14:03.199 | 70.00th=[ 9372], 80.00th=[ 9765], 90.00th=[10945], 95.00th=[12780], 00:14:03.199 | 99.00th=[14091], 99.50th=[14484], 99.90th=[16188], 99.95th=[16909], 00:14:03.199 | 99.99th=[17433] 00:14:03.199 bw ( KiB/s): min= 7168, max=26142, per=50.19%, avg=19434.00, stdev=6491.41, samples=11 00:14:03.199 iops : min= 1792, max= 6535, avg=4858.45, stdev=1622.80, samples=11 00:14:03.199 write: IOPS=5687, BW=22.2MiB/s (23.3MB/s)(115MiB/5158msec); 0 zone resets 00:14:03.199 slat (usec): min=11, max=5210, avg=72.84, stdev=147.02 00:14:03.199 clat (usec): min=923, max=16805, avg=7958.70, stdev=1576.55 00:14:03.199 lat (usec): min=1029, max=17206, avg=8031.53, stdev=1581.84 00:14:03.199 clat percentiles (usec): 00:14:03.199 | 1.00th=[ 4228], 5.00th=[ 5211], 10.00th=[ 5866], 20.00th=[ 7046], 00:14:03.199 | 30.00th=[ 7504], 40.00th=[ 7767], 50.00th=[ 8029], 60.00th=[ 8225], 00:14:03.199 | 70.00th=[ 8455], 80.00th=[ 8848], 90.00th=[ 9634], 95.00th=[10552], 00:14:03.199 | 99.00th=[12780], 99.50th=[13960], 99.90th=[15795], 99.95th=[16057], 00:14:03.199 | 99.99th=[16909] 00:14:03.199 bw ( KiB/s): min= 7560, max=25706, per=85.55%, avg=19463.91, stdev=6152.71, samples=11 00:14:03.199 iops : min= 1890, max= 6426, avg=4865.82, stdev=1538.21, samples=11 00:14:03.199 lat (usec) : 1000=0.01% 00:14:03.199 lat (msec) : 2=0.02%, 4=0.37%, 10=86.57%, 20=13.04% 00:14:03.199 cpu : usr=7.46%, sys=28.97%, ctx=5289, majf=0, minf=139 00:14:03.199 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:14:03.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:03.199 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:03.199 issued rwts: total=58139,29337,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:03.199 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:03.199 00:14:03.199 Run status group 0 (all jobs): 00:14:03.199 READ: bw=37.8MiB/s (39.6MB/s), 37.8MiB/s-37.8MiB/s (39.6MB/s-39.6MB/s), io=227MiB (238MB), run=6006-6006msec 00:14:03.199 WRITE: bw=22.2MiB/s (23.3MB/s), 22.2MiB/s-22.2MiB/s (23.3MB/s-23.3MB/s), io=115MiB (120MB), run=5158-5158msec 00:14:03.199 00:14:03.199 Disk stats (read/write): 00:14:03.199 nvme0n1: ios=57309/28751, merge=0/0, ticks=489678/209085, in_queue=698763, util=98.77% 00:14:03.199 14:18:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:14:03.199 14:18:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:14:03.199 14:18:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:14:03.199 14:18:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:14:03.199 14:18:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:03.199 14:18:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:14:03.199 14:18:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:14:03.199 14:18:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:14:03.199 14:18:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:14:03.199 14:18:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:14:03.199 14:18:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:03.199 14:18:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:14:03.199 14:18:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:14:03.199 14:18:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:14:03.199 14:18:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:14:03.199 14:18:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=68287 00:14:03.199 14:18:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:14:03.199 14:18:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:14:03.199 [global] 00:14:03.199 thread=1 00:14:03.199 invalidate=1 00:14:03.199 rw=randrw 00:14:03.199 time_based=1 00:14:03.199 runtime=6 00:14:03.199 ioengine=libaio 00:14:03.199 direct=1 00:14:03.199 bs=4096 00:14:03.199 iodepth=128 00:14:03.199 norandommap=0 00:14:03.199 numjobs=1 00:14:03.199 00:14:03.199 verify_dump=1 00:14:03.199 verify_backlog=512 00:14:03.199 verify_state_save=0 00:14:03.199 do_verify=1 00:14:03.199 verify=crc32c-intel 00:14:03.199 [job0] 00:14:03.200 filename=/dev/nvme0n1 00:14:03.200 Could not set queue depth (nvme0n1) 00:14:03.200 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:03.200 fio-3.35 00:14:03.200 Starting 1 thread 00:14:03.766 14:18:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:14:04.025 14:18:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.4 -s 4420 -n non_optimized 00:14:04.284 14:18:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:14:04.284 14:18:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:14:04.284 14:18:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:04.285 14:18:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:14:04.285 14:18:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:14:04.285 14:18:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:14:04.285 14:18:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:14:04.285 14:18:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:14:04.285 14:18:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:04.285 14:18:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:14:04.285 14:18:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:14:04.285 14:18:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:14:04.285 14:18:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:14:04.544 14:18:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:14:04.544 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:14:04.544 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:14:04.544 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:04.544 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:14:04.544 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:14:04.544 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:14:04.544 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:14:04.544 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:14:04.544 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:04.544 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:14:04.544 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:14:04.544 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:14:04.544 14:18:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 68287 00:14:09.817 00:14:09.817 job0: (groupid=0, jobs=1): err= 0: pid=68308: Wed Nov 6 14:18:36 2024 00:14:09.817 read: IOPS=9800, BW=38.3MiB/s (40.1MB/s)(230MiB/6002msec) 00:14:09.817 slat (usec): min=4, max=7329, avg=51.88, stdev=192.32 00:14:09.817 clat (usec): min=248, max=23927, avg=9072.02, stdev=2892.06 00:14:09.817 lat (usec): min=259, max=23941, avg=9123.89, stdev=2901.26 00:14:09.817 clat percentiles (usec): 00:14:09.817 | 1.00th=[ 1450], 5.00th=[ 4228], 10.00th=[ 5669], 20.00th=[ 7439], 00:14:09.817 | 30.00th=[ 8356], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9241], 00:14:09.817 | 70.00th=[ 9896], 80.00th=[10683], 90.00th=[11994], 95.00th=[14222], 00:14:09.817 | 99.00th=[18220], 99.50th=[19268], 99.90th=[21890], 99.95th=[22152], 00:14:09.817 | 99.99th=[23200] 00:14:09.817 bw ( KiB/s): min= 4064, max=30608, per=51.51%, avg=20194.18, stdev=8835.00, samples=11 00:14:09.817 iops : min= 1016, max= 7652, avg=5048.55, stdev=2208.75, samples=11 00:14:09.817 write: IOPS=6150, BW=24.0MiB/s (25.2MB/s)(119MiB/4960msec); 0 zone resets 00:14:09.817 slat (usec): min=11, max=2090, avg=63.03, stdev=120.71 00:14:09.817 clat (usec): min=330, max=21794, avg=7473.45, stdev=2711.30 00:14:09.817 lat (usec): min=370, max=21831, avg=7536.48, stdev=2722.11 00:14:09.817 clat percentiles (usec): 00:14:09.817 | 1.00th=[ 1237], 5.00th=[ 3064], 10.00th=[ 4080], 20.00th=[ 5145], 00:14:09.817 | 30.00th=[ 6128], 40.00th=[ 7308], 50.00th=[ 7767], 60.00th=[ 8160], 00:14:09.817 | 70.00th=[ 8586], 80.00th=[ 9110], 90.00th=[10421], 95.00th=[11863], 00:14:09.817 | 99.00th=[15401], 99.50th=[16057], 99.90th=[18482], 99.95th=[19792], 00:14:09.817 | 99.99th=[20579] 00:14:09.817 bw ( KiB/s): min= 4528, max=30000, per=82.36%, avg=20261.64, stdev=8591.08, samples=11 00:14:09.817 iops : min= 1132, max= 7500, avg=5065.36, stdev=2147.72, samples=11 00:14:09.817 lat (usec) : 250=0.01%, 500=0.02%, 750=0.08%, 1000=0.21% 00:14:09.817 lat (msec) : 2=2.08%, 4=3.87%, 10=69.99%, 20=23.50%, 50=0.23% 00:14:09.817 cpu : usr=6.38%, sys=29.61%, ctx=5885, majf=0, minf=127 00:14:09.817 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:14:09.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:09.817 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:09.817 issued rwts: total=58823,30504,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:09.817 
latency : target=0, window=0, percentile=100.00%, depth=128 00:14:09.817 00:14:09.817 Run status group 0 (all jobs): 00:14:09.817 READ: bw=38.3MiB/s (40.1MB/s), 38.3MiB/s-38.3MiB/s (40.1MB/s-40.1MB/s), io=230MiB (241MB), run=6002-6002msec 00:14:09.817 WRITE: bw=24.0MiB/s (25.2MB/s), 24.0MiB/s-24.0MiB/s (25.2MB/s-25.2MB/s), io=119MiB (125MB), run=4960-4960msec 00:14:09.817 00:14:09.817 Disk stats (read/write): 00:14:09.817 nvme0n1: ios=58016/30005, merge=0/0, ticks=497941/205204, in_queue=703145, util=98.70% 00:14:09.817 14:18:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:09.817 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:14:09.817 14:18:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:09.817 14:18:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1221 -- # local i=0 00:14:09.817 14:18:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:14:09.817 14:18:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:09.817 14:18:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:09.817 14:18:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:14:09.817 14:18:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1233 -- # return 0 00:14:09.817 14:18:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:09.817 14:18:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:14:09.817 14:18:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:14:09.817 14:18:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:14:09.817 14:18:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:14:09.817 14:18:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:09.817 14:18:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:14:09.817 14:18:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:09.817 14:18:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:14:09.817 14:18:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:09.817 14:18:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:09.817 rmmod nvme_tcp 00:14:09.817 rmmod nvme_fabrics 00:14:09.817 rmmod nvme_keyring 00:14:09.817 14:18:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:09.817 14:18:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:14:09.817 14:18:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:14:09.817 14:18:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@517 -- # '[' -n 68091 ']' 00:14:09.818 14:18:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 68091 00:14:09.818 14:18:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@952 -- # '[' -z 68091 ']' 00:14:09.818 14:18:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # kill -0 68091 00:14:09.818 14:18:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@957 -- # uname 00:14:09.818 14:18:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:09.818 14:18:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 68091 00:14:09.818 killing process with pid 68091 00:14:09.818 14:18:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:09.818 14:18:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:09.818 14:18:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@970 -- # echo 'killing process with pid 68091' 00:14:09.818 14:18:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@971 -- # kill 68091 00:14:09.818 14:18:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@976 -- # wait 68091 00:14:11.199 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:11.199 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:11.199 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:11.199 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:14:11.199 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:14:11.199 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:11.199 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:14:11.199 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:11.199 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:11.199 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:11.199 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:11.199 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:11.199 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:11.199 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:11.199 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:11.199 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:11.199 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set 
nvmf_tgt_br2 down 00:14:11.199 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:11.199 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:11.199 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:11.458 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:11.458 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:11.458 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:11.458 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:11.458 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:11.458 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:11.458 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:14:11.458 ************************************ 00:14:11.458 END TEST nvmf_target_multipath 00:14:11.458 ************************************ 00:14:11.458 00:14:11.458 real 0m21.268s 00:14:11.458 user 1m15.907s 00:14:11.458 sys 0m10.921s 00:14:11.458 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:11.458 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:11.458 14:18:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:11.458 14:18:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:11.458 14:18:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:11.458 14:18:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:11.459 ************************************ 00:14:11.459 START TEST nvmf_zcopy 00:14:11.459 ************************************ 00:14:11.459 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:11.718 * Looking for test storage... 
00:14:11.718 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:11.718 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:11.718 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:14:11.718 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:11.718 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:11.718 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:11.718 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:11.718 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:11.718 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:14:11.718 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:14:11.718 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:14:11.718 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:14:11.718 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:14:11.718 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:14:11.718 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:14:11.718 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:11.718 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:14:11.718 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:14:11.718 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:11.718 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:11.718 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:14:11.718 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:11.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.719 --rc genhtml_branch_coverage=1 00:14:11.719 --rc genhtml_function_coverage=1 00:14:11.719 --rc genhtml_legend=1 00:14:11.719 --rc geninfo_all_blocks=1 00:14:11.719 --rc geninfo_unexecuted_blocks=1 00:14:11.719 00:14:11.719 ' 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:11.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.719 --rc genhtml_branch_coverage=1 00:14:11.719 --rc genhtml_function_coverage=1 00:14:11.719 --rc genhtml_legend=1 00:14:11.719 --rc geninfo_all_blocks=1 00:14:11.719 --rc geninfo_unexecuted_blocks=1 00:14:11.719 00:14:11.719 ' 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:11.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.719 --rc genhtml_branch_coverage=1 00:14:11.719 --rc genhtml_function_coverage=1 00:14:11.719 --rc genhtml_legend=1 00:14:11.719 --rc geninfo_all_blocks=1 00:14:11.719 --rc geninfo_unexecuted_blocks=1 00:14:11.719 00:14:11.719 ' 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:11.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.719 --rc genhtml_branch_coverage=1 00:14:11.719 --rc genhtml_function_coverage=1 00:14:11.719 --rc genhtml_legend=1 00:14:11.719 --rc geninfo_all_blocks=1 00:14:11.719 --rc geninfo_unexecuted_blocks=1 00:14:11.719 00:14:11.719 ' 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:11.719 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:11.719 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:11.720 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:11.720 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:11.720 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:11.720 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:11.720 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:11.720 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:11.720 Cannot find device "nvmf_init_br" 00:14:11.720 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:14:11.720 14:18:39 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:11.979 Cannot find device "nvmf_init_br2" 00:14:11.979 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:14:11.979 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:11.979 Cannot find device "nvmf_tgt_br" 00:14:11.979 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:14:11.979 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:11.979 Cannot find device "nvmf_tgt_br2" 00:14:11.979 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:14:11.979 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:11.979 Cannot find device "nvmf_init_br" 00:14:11.979 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:14:11.979 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:11.979 Cannot find device "nvmf_init_br2" 00:14:11.979 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:14:11.979 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:11.979 Cannot find device "nvmf_tgt_br" 00:14:11.979 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:14:11.979 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:11.979 Cannot find device "nvmf_tgt_br2" 00:14:11.979 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:14:11.979 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:11.979 Cannot find device "nvmf_br" 00:14:11.979 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:14:11.979 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:11.979 Cannot find device "nvmf_init_if" 00:14:11.979 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:14:11.979 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:11.979 Cannot find device "nvmf_init_if2" 00:14:11.979 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:14:11.979 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:11.979 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:11.979 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:14:11.979 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:11.979 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:11.979 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:14:11.979 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:11.979 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:11.979 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:14:11.979 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:11.979 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:12.238 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:12.238 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:12.238 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:12.238 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:12.238 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:12.238 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:12.238 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:12.238 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:12.238 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:12.238 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:12.238 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:12.238 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:12.238 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:12.238 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:12.238 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:12.238 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:12.238 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:12.238 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:12.238 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:12.238 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:12.238 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:12.238 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:12.238 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:12.238 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:12.238 14:18:39 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:12.238 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:12.238 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:12.238 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:12.238 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:12.238 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.569 ms 00:14:12.238 00:14:12.238 --- 10.0.0.3 ping statistics --- 00:14:12.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.238 rtt min/avg/max/mdev = 0.569/0.569/0.569/0.000 ms 00:14:12.238 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:12.238 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:12.238 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.101 ms 00:14:12.238 00:14:12.238 --- 10.0.0.4 ping statistics --- 00:14:12.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.238 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:14:12.238 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:12.238 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:12.238 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:14:12.238 00:14:12.238 --- 10.0.0.1 ping statistics --- 00:14:12.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.238 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:14:12.238 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:12.497 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:12.497 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:14:12.497 00:14:12.497 --- 10.0.0.2 ping statistics --- 00:14:12.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.497 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:14:12.497 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:12.497 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:14:12.497 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:12.497 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:12.497 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:12.497 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:12.497 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:12.497 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:12.497 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:12.497 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:14:12.497 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:12.497 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:12.497 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:12.497 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=68626 00:14:12.497 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 68626 00:14:12.497 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:12.497 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 68626 ']' 00:14:12.497 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.497 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:12.497 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:12.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:12.497 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:12.497 14:18:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:12.497 [2024-11-06 14:18:40.029111] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
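
The nvmf/common.sh trace above first tears down any leftover interfaces (hence the "Cannot find device" messages) and then builds the veth/namespace test bed the rest of the run depends on: two initiator-side veth pairs on the host, two target-side pairs whose far ends live in the nvmf_tgt_ns_spdk namespace, all bridge-side ends enslaved to nvmf_br, iptables rules admitting NVMe/TCP traffic on port 4420, and ping checks in both directions. Condensed into plain commands, a sketch of that topology (assembled from the trace, not the script itself, so the real helper functions may handle options differently) looks like:

    ip netns add nvmf_tgt_ns_spdk
    # veth pairs: the *_if ends carry addresses, the *_br ends get bridged together
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk    # target ends live in the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up; ip link set nvmf_init_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for br in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$br" up
        ip link set "$br" master nvmf_br
    done
    # admit NVMe/TCP traffic, then verify reachability both ways
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
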
00:14:12.497 [2024-11-06 14:18:40.029310] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:12.756 [2024-11-06 14:18:40.221785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.756 [2024-11-06 14:18:40.350779] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:12.756 [2024-11-06 14:18:40.350869] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:12.756 [2024-11-06 14:18:40.350888] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:12.756 [2024-11-06 14:18:40.350911] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:12.756 [2024-11-06 14:18:40.350926] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:12.756 [2024-11-06 14:18:40.352429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:13.054 [2024-11-06 14:18:40.587118] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:13.313 14:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:13.313 14:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:14:13.313 14:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:13.313 14:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:13.313 14:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:13.313 14:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:13.313 14:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:14:13.314 14:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:14:13.314 14:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.314 14:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:13.620 [2024-11-06 14:18:40.949487] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:13.620 14:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.620 14:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:13.620 14:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.620 14:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:13.620 14:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.620 14:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:13.620 14:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.620 14:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:14:13.620 [2024-11-06 14:18:40.973752] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:13.620 14:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.620 14:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:14:13.620 14:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.620 14:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:13.620 14:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.620 14:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:14:13.620 14:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.620 14:18:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:13.620 malloc0 00:14:13.620 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.620 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:13.620 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.620 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:13.620 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.620 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:14:13.620 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:14:13.620 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:14:13.620 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:14:13.620 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:14:13.620 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:14:13.620 { 00:14:13.620 "params": { 00:14:13.620 "name": "Nvme$subsystem", 00:14:13.620 "trtype": "$TEST_TRANSPORT", 00:14:13.620 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:13.620 "adrfam": "ipv4", 00:14:13.620 "trsvcid": "$NVMF_PORT", 00:14:13.620 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:13.620 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:13.620 "hdgst": ${hdgst:-false}, 00:14:13.620 "ddgst": ${ddgst:-false} 00:14:13.620 }, 00:14:13.620 "method": "bdev_nvme_attach_controller" 00:14:13.620 } 00:14:13.620 EOF 00:14:13.620 )") 00:14:13.620 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:14:13.620 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
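
Before bdevperf is launched, zcopy.sh has already provisioned the target through rpc_cmd: a TCP transport with zero-copy enabled, subsystem nqn.2016-06.io.spdk:cnode1 with a 10.0.0.3:4420 listener plus a discovery listener, and a 32 MB malloc bdev attached as namespace 1. Against the nvmf_tgt started earlier with `ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2`, the equivalent sequence expressed as direct scripts/rpc.py calls would be roughly the sketch below (rpc_cmd in the harness wraps the same RPCs; the default /var/tmp/spdk.sock socket is assumed here):

    RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy          # zero-copy enabled; other flags as traced above
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    $RPC bdev_malloc_create 32 4096 -b malloc0                 # 32 MB malloc bdev, 4096-byte blocks
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
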
00:14:13.620 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:14:13.620 14:18:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:14:13.620 "params": { 00:14:13.620 "name": "Nvme1", 00:14:13.620 "trtype": "tcp", 00:14:13.620 "traddr": "10.0.0.3", 00:14:13.620 "adrfam": "ipv4", 00:14:13.620 "trsvcid": "4420", 00:14:13.620 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:13.620 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:13.620 "hdgst": false, 00:14:13.620 "ddgst": false 00:14:13.620 }, 00:14:13.620 "method": "bdev_nvme_attach_controller" 00:14:13.620 }' 00:14:13.620 [2024-11-06 14:18:41.170920] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:14:13.620 [2024-11-06 14:18:41.171130] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68659 ] 00:14:13.880 [2024-11-06 14:18:41.364229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.140 [2024-11-06 14:18:41.520119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.140 [2024-11-06 14:18:41.773389] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:14.399 Running I/O for 10 seconds... 00:14:16.718 5883.00 IOPS, 45.96 MiB/s [2024-11-06T14:18:45.290Z] 5916.50 IOPS, 46.22 MiB/s [2024-11-06T14:18:46.226Z] 5923.67 IOPS, 46.28 MiB/s [2024-11-06T14:18:47.162Z] 5921.50 IOPS, 46.26 MiB/s [2024-11-06T14:18:48.098Z] 5915.00 IOPS, 46.21 MiB/s [2024-11-06T14:18:49.034Z] 5917.50 IOPS, 46.23 MiB/s [2024-11-06T14:18:50.411Z] 5920.14 IOPS, 46.25 MiB/s [2024-11-06T14:18:51.348Z] 5916.25 IOPS, 46.22 MiB/s [2024-11-06T14:18:52.285Z] 5911.11 IOPS, 46.18 MiB/s [2024-11-06T14:18:52.285Z] 5907.70 IOPS, 46.15 MiB/s 00:14:24.650 Latency(us) 00:14:24.650 [2024-11-06T14:18:52.285Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:24.650 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:14:24.650 Verification LBA range: start 0x0 length 0x1000 00:14:24.650 Nvme1n1 : 10.01 5909.80 46.17 0.00 0.00 21602.45 562.58 29267.48 00:14:24.650 [2024-11-06T14:18:52.285Z] =================================================================================================================== 00:14:24.650 [2024-11-06T14:18:52.285Z] Total : 5909.80 46.17 0.00 0.00 21602.45 562.58 29267.48 00:14:26.029 14:18:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=68794 00:14:26.029 14:18:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:14:26.029 14:18:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:26.029 14:18:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:14:26.029 14:18:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:14:26.029 14:18:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:14:26.029 14:18:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:14:26.029 14:18:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:14:26.029 [2024-11-06 14:18:53.242654] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.029 [2024-11-06 14:18:53.242709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.029 14:18:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:14:26.029 { 00:14:26.029 "params": { 00:14:26.029 "name": "Nvme$subsystem", 00:14:26.029 "trtype": "$TEST_TRANSPORT", 00:14:26.029 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:26.029 "adrfam": "ipv4", 00:14:26.029 "trsvcid": "$NVMF_PORT", 00:14:26.029 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:26.029 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:26.029 "hdgst": ${hdgst:-false}, 00:14:26.029 "ddgst": ${ddgst:-false} 00:14:26.029 }, 00:14:26.029 "method": "bdev_nvme_attach_controller" 00:14:26.029 } 00:14:26.029 EOF 00:14:26.029 )") 00:14:26.029 14:18:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:14:26.029 14:18:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:14:26.029 [2024-11-06 14:18:53.254499] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.029 [2024-11-06 14:18:53.254547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.029 14:18:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:14:26.029 14:18:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:14:26.029 "params": { 00:14:26.029 "name": "Nvme1", 00:14:26.029 "trtype": "tcp", 00:14:26.029 "traddr": "10.0.0.3", 00:14:26.029 "adrfam": "ipv4", 00:14:26.029 "trsvcid": "4420", 00:14:26.029 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:26.029 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:26.029 "hdgst": false, 00:14:26.029 "ddgst": false 00:14:26.029 }, 00:14:26.029 "method": "bdev_nvme_attach_controller" 00:14:26.029 }' 00:14:26.029 [2024-11-06 14:18:53.266511] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.029 [2024-11-06 14:18:53.266549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.029 [2024-11-06 14:18:53.278464] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.029 [2024-11-06 14:18:53.278508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.029 [2024-11-06 14:18:53.290508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.030 [2024-11-06 14:18:53.290544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.030 [2024-11-06 14:18:53.302486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.030 [2024-11-06 14:18:53.302527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.030 [2024-11-06 14:18:53.314471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.030 [2024-11-06 14:18:53.314505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.030 [2024-11-06 14:18:53.326487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.030 [2024-11-06 14:18:53.326523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.030 [2024-11-06 14:18:53.338548] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.030 [2024-11-06 14:18:53.338583] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.030 [2024-11-06 14:18:53.350487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.030 [2024-11-06 14:18:53.350524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.030 [2024-11-06 14:18:53.355415] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:14:26.030 [2024-11-06 14:18:53.355540] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68794 ] 00:14:26.030 [2024-11-06 14:18:53.362476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.030 [2024-11-06 14:18:53.362518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.030 [2024-11-06 14:18:53.374470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.030 [2024-11-06 14:18:53.374506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.030 [2024-11-06 14:18:53.386504] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.030 [2024-11-06 14:18:53.386538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.030 [2024-11-06 14:18:53.398504] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.030 [2024-11-06 14:18:53.398544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.030 [2024-11-06 14:18:53.410500] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.030 [2024-11-06 14:18:53.410537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.030 [2024-11-06 14:18:53.422472] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.030 [2024-11-06 14:18:53.422508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.030 [2024-11-06 14:18:53.434470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.030 [2024-11-06 14:18:53.434502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.030 [2024-11-06 14:18:53.446463] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.030 [2024-11-06 14:18:53.446497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.030 [2024-11-06 14:18:53.458487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.030 [2024-11-06 14:18:53.458518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.030 [2024-11-06 14:18:53.470471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.030 [2024-11-06 14:18:53.470511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.030 [2024-11-06 14:18:53.482518] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.030 [2024-11-06 14:18:53.482549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.030 [2024-11-06 14:18:53.494483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:14:26.030 [2024-11-06 14:18:53.494518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.030 [2024-11-06 14:18:53.506465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.030 [2024-11-06 14:18:53.506496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.030 [2024-11-06 14:18:53.518520] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.030 [2024-11-06 14:18:53.518556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.030 [2024-11-06 14:18:53.530483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.030 [2024-11-06 14:18:53.530515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.030 [2024-11-06 14:18:53.542469] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.030 [2024-11-06 14:18:53.542507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.030 [2024-11-06 14:18:53.547522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.030 [2024-11-06 14:18:53.554460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.030 [2024-11-06 14:18:53.554491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.030 [2024-11-06 14:18:53.566468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.030 [2024-11-06 14:18:53.566502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.030 [2024-11-06 14:18:53.578482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.030 [2024-11-06 14:18:53.578513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.030 [2024-11-06 14:18:53.590481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.030 [2024-11-06 14:18:53.590515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.030 [2024-11-06 14:18:53.602475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.030 [2024-11-06 14:18:53.602506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.030 [2024-11-06 14:18:53.614486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.030 [2024-11-06 14:18:53.614521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.030 [2024-11-06 14:18:53.626515] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.030 [2024-11-06 14:18:53.626547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.030 [2024-11-06 14:18:53.638471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.030 [2024-11-06 14:18:53.638507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.030 [2024-11-06 14:18:53.650500] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.030 [2024-11-06 14:18:53.650532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.030 [2024-11-06 14:18:53.662540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.030 [2024-11-06 14:18:53.662581] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.289 [2024-11-06 14:18:53.674472] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.289 [2024-11-06 14:18:53.674505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.289 [2024-11-06 14:18:53.686478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.289 [2024-11-06 14:18:53.686513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.289 [2024-11-06 14:18:53.695102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.289 [2024-11-06 14:18:53.698473] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.289 [2024-11-06 14:18:53.698516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.289 [2024-11-06 14:18:53.710469] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.289 [2024-11-06 14:18:53.710503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.289 [2024-11-06 14:18:53.722458] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.289 [2024-11-06 14:18:53.722506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.289 [2024-11-06 14:18:53.734463] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.289 [2024-11-06 14:18:53.734498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.289 [2024-11-06 14:18:53.746487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.289 [2024-11-06 14:18:53.746518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.289 [2024-11-06 14:18:53.758453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.289 [2024-11-06 14:18:53.758487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.289 [2024-11-06 14:18:53.770494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.289 [2024-11-06 14:18:53.770526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.289 [2024-11-06 14:18:53.782491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.289 [2024-11-06 14:18:53.782526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.289 [2024-11-06 14:18:53.794482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.289 [2024-11-06 14:18:53.794513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.289 [2024-11-06 14:18:53.806466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.289 [2024-11-06 14:18:53.806503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.289 [2024-11-06 14:18:53.818477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.289 [2024-11-06 14:18:53.818509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.289 [2024-11-06 14:18:53.830474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.289 [2024-11-06 14:18:53.830511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:14:26.289 [2024-11-06 14:18:53.842478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.289 [2024-11-06 14:18:53.842522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.289 [2024-11-06 14:18:53.854447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.289 [2024-11-06 14:18:53.854488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.289 [2024-11-06 14:18:53.866484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.289 [2024-11-06 14:18:53.866516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.289 [2024-11-06 14:18:53.878498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.289 [2024-11-06 14:18:53.878532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.289 [2024-11-06 14:18:53.890471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.289 [2024-11-06 14:18:53.890502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.289 [2024-11-06 14:18:53.902580] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.289 [2024-11-06 14:18:53.902614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.289 [2024-11-06 14:18:53.914503] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.289 [2024-11-06 14:18:53.914536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.547 [2024-11-06 14:18:53.926449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.547 [2024-11-06 14:18:53.926486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.548 [2024-11-06 14:18:53.938485] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.548 [2024-11-06 14:18:53.938527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.548 [2024-11-06 14:18:53.950454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.548 [2024-11-06 14:18:53.950487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.548 [2024-11-06 14:18:53.952131] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:26.548 [2024-11-06 14:18:53.966492] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.548 [2024-11-06 14:18:53.966526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.548 [2024-11-06 14:18:53.982511] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.548 [2024-11-06 14:18:53.982559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.548 [2024-11-06 14:18:53.998494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.548 [2024-11-06 14:18:53.998529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.548 [2024-11-06 14:18:54.014498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.548 [2024-11-06 14:18:54.014533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:14:26.548 [2024-11-06 14:18:54.030477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.548 [2024-11-06 14:18:54.030509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.548 [2024-11-06 14:18:54.046474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.548 [2024-11-06 14:18:54.046509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.548 [2024-11-06 14:18:54.062491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.548 [2024-11-06 14:18:54.062523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.548 [2024-11-06 14:18:54.078452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.548 [2024-11-06 14:18:54.078492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.548 [2024-11-06 14:18:54.094524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.548 [2024-11-06 14:18:54.094559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.548 [2024-11-06 14:18:54.106512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.548 [2024-11-06 14:18:54.106551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.548 [2024-11-06 14:18:54.122488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.548 [2024-11-06 14:18:54.122527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.548 [2024-11-06 14:18:54.134537] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.548 [2024-11-06 14:18:54.134573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.548 [2024-11-06 14:18:54.146505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.548 [2024-11-06 14:18:54.146547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.548 [2024-11-06 14:18:54.162654] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.548 [2024-11-06 14:18:54.162700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.548 [2024-11-06 14:18:54.178578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.548 [2024-11-06 14:18:54.178621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.548 Running I/O for 5 seconds... 
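
The first bdevperf pass above (verify workload, 8 KiB I/O, queue depth 128, 10 s) finishes at roughly 5.9k IOPS, and a second bdevperf run now starts for 5 seconds with `-w randrw -M 50`. The long runs of "Requested NSID 1 already in use" / "Unable to add namespace" pairs before and after this point are target-side errors from repeated nvmf_subsystem_add_ns attempts while namespace 1 is still attached; the calling loop itself is not visible because xtrace is disabled in zcopy.sh at this stage. The bdevperf side is driven purely by the JSON printed by gen_nvmf_target_json earlier. A sketch of that invocation is below; only the params object appears verbatim in the log, and the outer subsystems/bdev wrapper is assumed from SPDK's usual JSON config layout:

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    # connection config for the remote namespace exported by nqn.2016-06.io.spdk:cnode1
    config='{
      "subsystems": [ { "subsystem": "bdev", "config": [ {
          "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.3",
                      "adrfam": "ipv4", "trsvcid": "4420",
                      "subnqn": "nqn.2016-06.io.spdk:cnode1",
                      "hostnqn": "nqn.2016-06.io.spdk:host1",
                      "hdgst": false, "ddgst": false } } ] } ]
    }'
    # feed the config via process substitution, matching the --json /dev/fd/NN form in the trace
    "$bdevperf" --json <(printf '%s\n' "$config") -t 5 -q 128 -w randrw -M 50 -o 8192
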
00:14:26.806 [2024-11-06 14:18:54.196024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.806 [2024-11-06 14:18:54.196074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.806 [2024-11-06 14:18:54.212635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.806 [2024-11-06 14:18:54.212678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.806 [2024-11-06 14:18:54.229537] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.806 [2024-11-06 14:18:54.229595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.806 [2024-11-06 14:18:54.246263] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.806 [2024-11-06 14:18:54.246303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.806 [2024-11-06 14:18:54.263469] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.807 [2024-11-06 14:18:54.263513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.807 [2024-11-06 14:18:54.280045] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.807 [2024-11-06 14:18:54.280082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.807 [2024-11-06 14:18:54.297328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.807 [2024-11-06 14:18:54.297372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.807 [2024-11-06 14:18:54.313772] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.807 [2024-11-06 14:18:54.313813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.807 [2024-11-06 14:18:54.330686] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.807 [2024-11-06 14:18:54.330731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.807 [2024-11-06 14:18:54.347383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.807 [2024-11-06 14:18:54.347435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.807 [2024-11-06 14:18:54.364615] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.807 [2024-11-06 14:18:54.364659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.807 [2024-11-06 14:18:54.381632] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.807 [2024-11-06 14:18:54.381669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.807 [2024-11-06 14:18:54.397554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.807 [2024-11-06 14:18:54.397600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.807 [2024-11-06 14:18:54.415999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.807 [2024-11-06 14:18:54.416038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.807 [2024-11-06 14:18:54.430470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:26.807 
[2024-11-06 14:18:54.430513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.065 [2024-11-06 14:18:54.445957] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.065 [2024-11-06 14:18:54.445994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.065 [2024-11-06 14:18:54.462995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.065 [2024-11-06 14:18:54.463034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.065 [2024-11-06 14:18:54.479839] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.065 [2024-11-06 14:18:54.479886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.065 [2024-11-06 14:18:54.495695] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.065 [2024-11-06 14:18:54.495738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.065 [2024-11-06 14:18:54.514173] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.065 [2024-11-06 14:18:54.514210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.066 [2024-11-06 14:18:54.528632] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.066 [2024-11-06 14:18:54.528704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.066 [2024-11-06 14:18:54.544569] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.066 [2024-11-06 14:18:54.544605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.066 [2024-11-06 14:18:54.561680] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.066 [2024-11-06 14:18:54.561723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.066 [2024-11-06 14:18:54.578695] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.066 [2024-11-06 14:18:54.578732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.066 [2024-11-06 14:18:54.595200] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.066 [2024-11-06 14:18:54.595241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.066 [2024-11-06 14:18:54.616703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.066 [2024-11-06 14:18:54.616742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.066 [2024-11-06 14:18:54.631989] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.066 [2024-11-06 14:18:54.632028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.066 [2024-11-06 14:18:54.641558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.066 [2024-11-06 14:18:54.641607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.066 [2024-11-06 14:18:54.658088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.066 [2024-11-06 14:18:54.658148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.066 [2024-11-06 14:18:54.675584] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.066 [2024-11-06 14:18:54.675623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.066 [2024-11-06 14:18:54.690977] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.066 [2024-11-06 14:18:54.691019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.325 [2024-11-06 14:18:54.700202] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.325 [2024-11-06 14:18:54.700252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.325 [2024-11-06 14:18:54.713266] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.325 [2024-11-06 14:18:54.713307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.325 [2024-11-06 14:18:54.723643] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.325 [2024-11-06 14:18:54.723680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.325 [2024-11-06 14:18:54.734164] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.325 [2024-11-06 14:18:54.734204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.325 [2024-11-06 14:18:54.746283] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.325 [2024-11-06 14:18:54.746326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.325 [2024-11-06 14:18:54.755837] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.325 [2024-11-06 14:18:54.755887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.325 [2024-11-06 14:18:54.767608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.325 [2024-11-06 14:18:54.767646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.325 [2024-11-06 14:18:54.778067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.325 [2024-11-06 14:18:54.778106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.325 [2024-11-06 14:18:54.789185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.325 [2024-11-06 14:18:54.789238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.325 [2024-11-06 14:18:54.799945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.325 [2024-11-06 14:18:54.799985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.325 [2024-11-06 14:18:54.812461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.325 [2024-11-06 14:18:54.812499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.325 [2024-11-06 14:18:54.822326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.325 [2024-11-06 14:18:54.822393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.325 [2024-11-06 14:18:54.836584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.325 [2024-11-06 14:18:54.836621] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.325 [2024-11-06 14:18:54.851304] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.325 [2024-11-06 14:18:54.851351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.325 [2024-11-06 14:18:54.868468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.325 [2024-11-06 14:18:54.868518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.325 [2024-11-06 14:18:54.884065] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.325 [2024-11-06 14:18:54.884106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.325 [2024-11-06 14:18:54.902954] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.325 [2024-11-06 14:18:54.902992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.325 [2024-11-06 14:18:54.917267] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.325 [2024-11-06 14:18:54.917310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.325 [2024-11-06 14:18:54.933260] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.325 [2024-11-06 14:18:54.933296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.325 [2024-11-06 14:18:54.954700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.325 [2024-11-06 14:18:54.954752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.584 [2024-11-06 14:18:54.969689] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.584 [2024-11-06 14:18:54.969734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.584 [2024-11-06 14:18:54.987179] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.584 [2024-11-06 14:18:54.987227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.584 [2024-11-06 14:18:55.003740] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.584 [2024-11-06 14:18:55.003782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.584 [2024-11-06 14:18:55.021474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.584 [2024-11-06 14:18:55.021519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.584 [2024-11-06 14:18:55.041554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.584 [2024-11-06 14:18:55.041596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.584 [2024-11-06 14:18:55.062148] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.584 [2024-11-06 14:18:55.062195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.584 [2024-11-06 14:18:55.082476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.584 [2024-11-06 14:18:55.082518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.584 [2024-11-06 14:18:55.102529] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.584 [2024-11-06 14:18:55.102576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.584 [2024-11-06 14:18:55.120508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.584 [2024-11-06 14:18:55.120552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.584 [2024-11-06 14:18:55.141209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.584 [2024-11-06 14:18:55.141263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.584 [2024-11-06 14:18:55.161060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.584 [2024-11-06 14:18:55.161101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.584 [2024-11-06 14:18:55.177857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.584 [2024-11-06 14:18:55.177918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.584 11814.00 IOPS, 92.30 MiB/s [2024-11-06T14:18:55.219Z] [2024-11-06 14:18:55.194690] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.584 [2024-11-06 14:18:55.194732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.584 [2024-11-06 14:18:55.210735] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.584 [2024-11-06 14:18:55.210781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.843 [2024-11-06 14:18:55.228284] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.843 [2024-11-06 14:18:55.228340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.843 [2024-11-06 14:18:55.243926] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.843 [2024-11-06 14:18:55.243967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.843 [2024-11-06 14:18:55.266149] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.843 [2024-11-06 14:18:55.266185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.843 [2024-11-06 14:18:55.286009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.843 [2024-11-06 14:18:55.286052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.843 [2024-11-06 14:18:55.305697] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.844 [2024-11-06 14:18:55.305737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.844 [2024-11-06 14:18:55.327426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.844 [2024-11-06 14:18:55.327470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.844 [2024-11-06 14:18:55.343238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.844 [2024-11-06 14:18:55.343276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.844 [2024-11-06 14:18:55.361407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:14:27.844 [2024-11-06 14:18:55.361450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.844 [2024-11-06 14:18:55.376928] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.844 [2024-11-06 14:18:55.376989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.844 [2024-11-06 14:18:55.395380] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.844 [2024-11-06 14:18:55.395423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.844 [2024-11-06 14:18:55.415945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.844 [2024-11-06 14:18:55.415983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.844 [2024-11-06 14:18:55.430983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.844 [2024-11-06 14:18:55.431047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.844 [2024-11-06 14:18:55.448120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.844 [2024-11-06 14:18:55.448160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:27.844 [2024-11-06 14:18:55.465436] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:27.844 [2024-11-06 14:18:55.465482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.135 [2024-11-06 14:18:55.480110] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.135 [2024-11-06 14:18:55.480182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.135 [2024-11-06 14:18:55.497461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.135 [2024-11-06 14:18:55.497504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.135 [2024-11-06 14:18:55.512058] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.135 [2024-11-06 14:18:55.512096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.135 [2024-11-06 14:18:55.533152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.135 [2024-11-06 14:18:55.533189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.135 [2024-11-06 14:18:55.552903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.135 [2024-11-06 14:18:55.552942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.135 [2024-11-06 14:18:55.574135] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.135 [2024-11-06 14:18:55.574175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.135 [2024-11-06 14:18:55.595810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.135 [2024-11-06 14:18:55.595864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.135 [2024-11-06 14:18:55.616447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.135 [2024-11-06 14:18:55.616486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.135 [2024-11-06 14:18:55.636643] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.135 [2024-11-06 14:18:55.636683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.135 [2024-11-06 14:18:55.656846] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.135 [2024-11-06 14:18:55.656895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.135 [2024-11-06 14:18:55.677208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.135 [2024-11-06 14:18:55.677246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.135 [2024-11-06 14:18:55.696929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.135 [2024-11-06 14:18:55.696966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.135 [2024-11-06 14:18:55.714649] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.135 [2024-11-06 14:18:55.714689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.135 [2024-11-06 14:18:55.736031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.135 [2024-11-06 14:18:55.736072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.399 [2024-11-06 14:18:55.756173] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.399 [2024-11-06 14:18:55.756214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.399 [2024-11-06 14:18:55.777091] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.399 [2024-11-06 14:18:55.777132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.399 [2024-11-06 14:18:55.797013] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.399 [2024-11-06 14:18:55.797051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.399 [2024-11-06 14:18:55.818023] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.399 [2024-11-06 14:18:55.818061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.399 [2024-11-06 14:18:55.837519] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.399 [2024-11-06 14:18:55.837559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.399 [2024-11-06 14:18:55.859124] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.399 [2024-11-06 14:18:55.859166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.399 [2024-11-06 14:18:55.873446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.399 [2024-11-06 14:18:55.873485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.399 [2024-11-06 14:18:55.890604] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.399 [2024-11-06 14:18:55.890659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.399 [2024-11-06 14:18:55.907566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.399 [2024-11-06 14:18:55.907606] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.399 [2024-11-06 14:18:55.923194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.399 [2024-11-06 14:18:55.923232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.399 [2024-11-06 14:18:55.933093] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.399 [2024-11-06 14:18:55.933144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.399 [2024-11-06 14:18:55.947991] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.399 [2024-11-06 14:18:55.948029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.399 [2024-11-06 14:18:55.965240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.399 [2024-11-06 14:18:55.965278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.399 [2024-11-06 14:18:55.982309] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.399 [2024-11-06 14:18:55.982357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.399 [2024-11-06 14:18:55.998909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.399 [2024-11-06 14:18:55.998948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.399 [2024-11-06 14:18:56.015877] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.399 [2024-11-06 14:18:56.015927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.399 [2024-11-06 14:18:56.033030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.399 [2024-11-06 14:18:56.033068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.659 [2024-11-06 14:18:56.054045] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.659 [2024-11-06 14:18:56.054083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.659 [2024-11-06 14:18:56.073449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.659 [2024-11-06 14:18:56.073488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.659 [2024-11-06 14:18:56.090238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.659 [2024-11-06 14:18:56.090276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.659 [2024-11-06 14:18:56.107057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.659 [2024-11-06 14:18:56.107094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.659 [2024-11-06 14:18:56.123549] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.659 [2024-11-06 14:18:56.123589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.659 [2024-11-06 14:18:56.144785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.659 [2024-11-06 14:18:56.144830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.659 [2024-11-06 14:18:56.159552] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.659 [2024-11-06 14:18:56.159597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.659 11898.00 IOPS, 92.95 MiB/s [2024-11-06T14:18:56.294Z] [2024-11-06 14:18:56.179690] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.659 [2024-11-06 14:18:56.179733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.659 [2024-11-06 14:18:56.196853] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.659 [2024-11-06 14:18:56.196897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.659 [2024-11-06 14:18:56.214587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.659 [2024-11-06 14:18:56.214631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.659 [2024-11-06 14:18:56.235390] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.659 [2024-11-06 14:18:56.235454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.659 [2024-11-06 14:18:56.250543] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.659 [2024-11-06 14:18:56.250590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.659 [2024-11-06 14:18:56.267798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.659 [2024-11-06 14:18:56.267857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.659 [2024-11-06 14:18:56.284171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.659 [2024-11-06 14:18:56.284213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.918 [2024-11-06 14:18:56.300830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.918 [2024-11-06 14:18:56.300883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.918 [2024-11-06 14:18:56.318776] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.918 [2024-11-06 14:18:56.318816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.918 [2024-11-06 14:18:56.333772] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.918 [2024-11-06 14:18:56.333823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.918 [2024-11-06 14:18:56.344013] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.918 [2024-11-06 14:18:56.344050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.918 [2024-11-06 14:18:56.359112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.918 [2024-11-06 14:18:56.359150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.918 [2024-11-06 14:18:56.375682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.918 [2024-11-06 14:18:56.375719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.918 [2024-11-06 14:18:56.397037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:14:28.918 [2024-11-06 14:18:56.397075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.918 [2024-11-06 14:18:56.417133] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.918 [2024-11-06 14:18:56.417186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.918 [2024-11-06 14:18:56.436873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.918 [2024-11-06 14:18:56.436927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.918 [2024-11-06 14:18:56.458553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.918 [2024-11-06 14:18:56.458591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.918 [2024-11-06 14:18:56.475331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.918 [2024-11-06 14:18:56.475370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.918 [2024-11-06 14:18:56.497251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.918 [2024-11-06 14:18:56.497313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.918 [2024-11-06 14:18:56.511979] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.918 [2024-11-06 14:18:56.512018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.918 [2024-11-06 14:18:56.522283] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.918 [2024-11-06 14:18:56.522330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:28.918 [2024-11-06 14:18:56.537323] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:28.918 [2024-11-06 14:18:56.537362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.178 [2024-11-06 14:18:56.553489] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.178 [2024-11-06 14:18:56.553529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.178 [2024-11-06 14:18:56.570396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.178 [2024-11-06 14:18:56.570436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.178 [2024-11-06 14:18:56.592030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.178 [2024-11-06 14:18:56.592068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.178 [2024-11-06 14:18:56.612283] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.178 [2024-11-06 14:18:56.612322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.178 [2024-11-06 14:18:56.633491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.178 [2024-11-06 14:18:56.633530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.178 [2024-11-06 14:18:56.653519] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.178 [2024-11-06 14:18:56.653587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.178 [2024-11-06 14:18:56.675282] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.178 [2024-11-06 14:18:56.675321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.178 [2024-11-06 14:18:56.695426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.178 [2024-11-06 14:18:56.695477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.178 [2024-11-06 14:18:56.717104] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.178 [2024-11-06 14:18:56.717143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.178 [2024-11-06 14:18:56.737052] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.178 [2024-11-06 14:18:56.737094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.178 [2024-11-06 14:18:56.752471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.178 [2024-11-06 14:18:56.752510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.178 [2024-11-06 14:18:56.770804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.178 [2024-11-06 14:18:56.770871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.178 [2024-11-06 14:18:56.791235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.178 [2024-11-06 14:18:56.791275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.178 [2024-11-06 14:18:56.811225] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.178 [2024-11-06 14:18:56.811267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.437 [2024-11-06 14:18:56.832645] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.437 [2024-11-06 14:18:56.832687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.438 [2024-11-06 14:18:56.852738] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.438 [2024-11-06 14:18:56.852778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.438 [2024-11-06 14:18:56.872949] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.438 [2024-11-06 14:18:56.872987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.438 [2024-11-06 14:18:56.894046] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.438 [2024-11-06 14:18:56.894085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.438 [2024-11-06 14:18:56.914680] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.438 [2024-11-06 14:18:56.914718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.438 [2024-11-06 14:18:56.934373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.438 [2024-11-06 14:18:56.934430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.438 [2024-11-06 14:18:56.954604] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.438 [2024-11-06 14:18:56.954642] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.438 [2024-11-06 14:18:56.975932] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.438 [2024-11-06 14:18:56.975986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.438 [2024-11-06 14:18:56.996633] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.438 [2024-11-06 14:18:56.996673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.438 [2024-11-06 14:18:57.016408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.438 [2024-11-06 14:18:57.016450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.438 [2024-11-06 14:18:57.034257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.438 [2024-11-06 14:18:57.034302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.438 [2024-11-06 14:18:57.049822] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.438 [2024-11-06 14:18:57.049873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.438 [2024-11-06 14:18:57.067517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.438 [2024-11-06 14:18:57.067556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.696 [2024-11-06 14:18:57.082201] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.696 [2024-11-06 14:18:57.082240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.696 [2024-11-06 14:18:57.098576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.696 [2024-11-06 14:18:57.098632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.696 [2024-11-06 14:18:57.114745] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.696 [2024-11-06 14:18:57.114783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.696 [2024-11-06 14:18:57.132143] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.696 [2024-11-06 14:18:57.132183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.696 [2024-11-06 14:18:57.149804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.696 [2024-11-06 14:18:57.149855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.696 [2024-11-06 14:18:57.164524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.696 [2024-11-06 14:18:57.164563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.696 11871.00 IOPS, 92.74 MiB/s [2024-11-06T14:18:57.331Z] [2024-11-06 14:18:57.181514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.696 [2024-11-06 14:18:57.181552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.696 [2024-11-06 14:18:57.197621] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.696 [2024-11-06 14:18:57.197675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.696 [2024-11-06 
14:18:57.216015] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.696 [2024-11-06 14:18:57.216054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.696 [2024-11-06 14:18:57.237570] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.697 [2024-11-06 14:18:57.237612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.697 [2024-11-06 14:18:57.256987] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.697 [2024-11-06 14:18:57.257027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.697 [2024-11-06 14:18:57.277405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.697 [2024-11-06 14:18:57.277452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.697 [2024-11-06 14:18:57.295298] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.697 [2024-11-06 14:18:57.295341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.697 [2024-11-06 14:18:57.311192] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.697 [2024-11-06 14:18:57.311234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.697 [2024-11-06 14:18:57.329107] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.697 [2024-11-06 14:18:57.329154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.955 [2024-11-06 14:18:57.349543] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.955 [2024-11-06 14:18:57.349587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.955 [2024-11-06 14:18:57.370700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.955 [2024-11-06 14:18:57.370743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.955 [2024-11-06 14:18:57.385675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.955 [2024-11-06 14:18:57.385716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.955 [2024-11-06 14:18:57.402914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.955 [2024-11-06 14:18:57.402956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.955 [2024-11-06 14:18:57.419249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.955 [2024-11-06 14:18:57.419291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.955 [2024-11-06 14:18:57.437135] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.955 [2024-11-06 14:18:57.437174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.955 [2024-11-06 14:18:57.451999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.955 [2024-11-06 14:18:57.452039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.955 [2024-11-06 14:18:57.461448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.955 [2024-11-06 14:18:57.461499] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.955 [2024-11-06 14:18:57.482909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.955 [2024-11-06 14:18:57.482951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.955 [2024-11-06 14:18:57.503965] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.955 [2024-11-06 14:18:57.504007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.955 [2024-11-06 14:18:57.524112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.955 [2024-11-06 14:18:57.524153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.955 [2024-11-06 14:18:57.544777] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.955 [2024-11-06 14:18:57.544818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.955 [2024-11-06 14:18:57.564849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.955 [2024-11-06 14:18:57.564887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:29.955 [2024-11-06 14:18:57.585458] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:29.955 [2024-11-06 14:18:57.585498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:30.215 [2024-11-06 14:18:57.609075] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:30.215 [2024-11-06 14:18:57.609116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:30.215 [2024-11-06 14:18:57.630247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:30.215 [2024-11-06 14:18:57.630306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:30.215 [2024-11-06 14:18:57.646089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:30.215 [2024-11-06 14:18:57.646128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:30.215 [2024-11-06 14:18:57.663833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:30.215 [2024-11-06 14:18:57.663892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:30.215 [2024-11-06 14:18:57.678047] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:30.215 [2024-11-06 14:18:57.678085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:30.215 [2024-11-06 14:18:57.693770] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:30.215 [2024-11-06 14:18:57.693810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:30.215 [2024-11-06 14:18:57.710583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:30.215 [2024-11-06 14:18:57.710623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:30.215 [2024-11-06 14:18:57.727684] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:30.215 [2024-11-06 14:18:57.727724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:30.215 [2024-11-06 14:18:57.744389] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:30.215 [2024-11-06 14:18:57.744430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:30.215 [2024-11-06 14:18:57.762399] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:30.215 [2024-11-06 14:18:57.762438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:30.215 [2024-11-06 14:18:57.776896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:30.215 [2024-11-06 14:18:57.776947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:30.215 [2024-11-06 14:18:57.792305] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:30.215 [2024-11-06 14:18:57.792343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:30.215 [2024-11-06 14:18:57.801596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:30.215 [2024-11-06 14:18:57.801633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:30.215 [2024-11-06 14:18:57.816585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:30.215 [2024-11-06 14:18:57.816622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:30.215 [2024-11-06 14:18:57.831606] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:30.215 [2024-11-06 14:18:57.831657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:30.473 [2024-11-06 14:18:57.848930] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:30.473 [2024-11-06 14:18:57.848999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:30.473 [2024-11-06 14:18:57.863546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:30.473 [2024-11-06 14:18:57.863600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:30.473 [2024-11-06 14:18:57.880427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:30.473 [2024-11-06 14:18:57.880494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:30.473 [2024-11-06 14:18:57.896893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:30.473 [2024-11-06 14:18:57.896941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:30.473 [2024-11-06 14:18:57.918492] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:30.473 [2024-11-06 14:18:57.918529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:30.473 [2024-11-06 14:18:57.938490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:30.473 [2024-11-06 14:18:57.938529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:30.473 [2024-11-06 14:18:57.954098] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:30.473 [2024-11-06 14:18:57.954136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:30.473 [2024-11-06 14:18:57.972331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:30.473 [2024-11-06 14:18:57.972369] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:30.473 [2024-11-06 14:18:57.993594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:30.473 [2024-11-06 14:18:57.993633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:30.473 [2024-11-06 14:18:58.014319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:30.473 [2024-11-06 14:18:58.014376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:30.473 [2024-11-06 14:18:58.035806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:30.473 [2024-11-06 14:18:58.035862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:30.474 [2024-11-06 14:18:58.056603] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:30.474 [2024-11-06 14:18:58.056644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:30.474 [2024-11-06 14:18:58.078130] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:30.474 [2024-11-06 14:18:58.078170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:30.474 [2024-11-06 14:18:58.097789] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:30.474 [2024-11-06 14:18:58.097830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:30.732 [2024-11-06 14:18:58.118488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:30.732 [2024-11-06 14:18:58.118528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:30.732 [2024-11-06 14:18:58.139873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:30.732 [2024-11-06 14:18:58.139923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:30.732 [2024-11-06 14:18:58.160273] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:30.732 [2024-11-06 14:18:58.160315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:30.732 11873.75 IOPS, 92.76 MiB/s [2024-11-06T14:18:58.367Z] [2024-11-06 14:18:58.181195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:30.732 [2024-11-06 14:18:58.181249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:30.732 [2024-11-06 14:18:58.201719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:30.732 [2024-11-06 14:18:58.201760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:30.732 [2024-11-06 14:18:58.218562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:30.732 [2024-11-06 14:18:58.218601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:30.732 [2024-11-06 14:18:58.239453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:30.732 [2024-11-06 14:18:58.239494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:30.732 [2024-11-06 14:18:58.260583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:30.732 [2024-11-06 14:18:58.260622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:30.732 [2024-11-06 
14:18:58.280601] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:30.732 [2024-11-06 14:18:58.280642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:30.732 [2024-11-06 14:18:58.298366] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:30.732 [2024-11-06 14:18:58.298425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:30.732 [2024-11-06 14:18:58.315226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:30.732 [2024-11-06 14:18:58.315271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:30.732 [2024-11-06 14:18:58.332468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:30.732 [2024-11-06 14:18:58.332513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:30.732 [2024-11-06 14:18:58.348621] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:30.732 [2024-11-06 14:18:58.348663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:30.732 [2024-11-06 14:18:58.364728] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:30.732 [2024-11-06 14:18:58.364770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:30.990 [2024-11-06 14:18:58.386011] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:30.991 [2024-11-06 14:18:58.386054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:30.991 [2024-11-06 14:18:58.401999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:30.991 [2024-11-06 14:18:58.402035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:30.991 [2024-11-06 14:18:58.423733] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:30.991 [2024-11-06 14:18:58.423773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:30.991 [2024-11-06 14:18:58.445125] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:30.991 [2024-11-06 14:18:58.445177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:30.991 [2024-11-06 14:18:58.459858] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:30.991 [2024-11-06 14:18:58.459920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:30.991 [2024-11-06 14:18:58.477223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:30.991 [2024-11-06 14:18:58.477264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:30.991 [2024-11-06 14:18:58.497732] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:30.991 [2024-11-06 14:18:58.497779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:30.991 [2024-11-06 14:18:58.518539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:30.991 [2024-11-06 14:18:58.518583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:30.991 [2024-11-06 14:18:58.533416] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:30.991 [2024-11-06 14:18:58.533458] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:30.991 [2024-11-06 14:18:58.555580] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:30.991 [2024-11-06 14:18:58.555627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:30.991 [2024-11-06 14:18:58.570442] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:30.991 [2024-11-06 14:18:58.570484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:30.991 [2024-11-06 14:18:58.580591] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:30.991 [2024-11-06 14:18:58.580630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:30.991 [2024-11-06 14:18:58.595758] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:30.991 [2024-11-06 14:18:58.595799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:30.991 [2024-11-06 14:18:58.612736] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:30.991 [2024-11-06 14:18:58.612777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:31.250 [2024-11-06 14:18:58.629551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:31.250 [2024-11-06 14:18:58.629593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:31.250 [2024-11-06 14:18:58.647999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:31.250 [2024-11-06 14:18:58.648038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:31.250 [2024-11-06 14:18:58.662630] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:31.250 [2024-11-06 14:18:58.662671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:31.250 [2024-11-06 14:18:58.678251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:31.250 [2024-11-06 14:18:58.678289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:31.250 [2024-11-06 14:18:58.687461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:31.250 [2024-11-06 14:18:58.687498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:31.250 [2024-11-06 14:18:58.704134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:31.250 [2024-11-06 14:18:58.704172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:31.250 [2024-11-06 14:18:58.720218] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:31.250 [2024-11-06 14:18:58.720256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:31.250 [2024-11-06 14:18:58.737327] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:31.250 [2024-11-06 14:18:58.737367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:31.250 [2024-11-06 14:18:58.754551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:31.250 [2024-11-06 14:18:58.754591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:31.250 [2024-11-06 14:18:58.764896] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:31.250 [2024-11-06 14:18:58.764960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:31.250 [2024-11-06 14:18:58.775771] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:31.250 [2024-11-06 14:18:58.775810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:31.250 [2024-11-06 14:18:58.787693] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:31.250 [2024-11-06 14:18:58.787731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:31.250 [2024-11-06 14:18:58.797012] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:31.250 [2024-11-06 14:18:58.797047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:31.250 [2024-11-06 14:18:58.811296] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:31.250 [2024-11-06 14:18:58.811335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:31.250 [2024-11-06 14:18:58.821204] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:31.250 [2024-11-06 14:18:58.821240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:31.250 [2024-11-06 14:18:58.837629] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:31.250 [2024-11-06 14:18:58.837668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:31.250 [2024-11-06 14:18:58.855333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:31.250 [2024-11-06 14:18:58.855376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:31.250 [2024-11-06 14:18:58.870368] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:31.250 [2024-11-06 14:18:58.870422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:31.509 [2024-11-06 14:18:58.889767] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:31.509 [2024-11-06 14:18:58.889806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:31.509 [2024-11-06 14:18:58.909755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:31.509 [2024-11-06 14:18:58.909795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:31.509 [2024-11-06 14:18:58.925575] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:31.509 [2024-11-06 14:18:58.925626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:31.509 [2024-11-06 14:18:58.944233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:31.509 [2024-11-06 14:18:58.944270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:31.509 [2024-11-06 14:18:58.959183] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:31.509 [2024-11-06 14:18:58.959222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:31.509 [2024-11-06 14:18:58.980639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:31.509 [2024-11-06 14:18:58.980679] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:31.509 [2024-11-06 14:18:58.998030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:31.509 [2024-11-06 14:18:58.998067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:31.509 [2024-11-06 14:18:59.012562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:31.509 [2024-11-06 14:18:59.012600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:31.509 [2024-11-06 14:18:59.028308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:31.509 [2024-11-06 14:18:59.028362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:31.509 [2024-11-06 14:18:59.046496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:31.509 [2024-11-06 14:18:59.046532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:31.509 [2024-11-06 14:18:59.066906] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:31.509 [2024-11-06 14:18:59.066944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:31.509 [2024-11-06 14:18:59.082193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:31.509 [2024-11-06 14:18:59.082232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:31.509 [2024-11-06 14:18:59.102672] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:31.509 [2024-11-06 14:18:59.102730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:31.509 [2024-11-06 14:18:59.119557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:31.509 [2024-11-06 14:18:59.119596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:31.509 [2024-11-06 14:18:59.136210] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:31.509 [2024-11-06 14:18:59.136249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:31.769 [2024-11-06 14:18:59.157169] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:31.769 [2024-11-06 14:18:59.157225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:31.769 11860.00 IOPS, 92.66 MiB/s [2024-11-06T14:18:59.404Z] [2024-11-06 14:18:59.177169] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:31.769 [2024-11-06 14:18:59.177206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:31.769
00:14:31.769 Latency(us)
00:14:31.769 [2024-11-06T14:18:59.404Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:31.769 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:14:31.769 Nvme1n1 : 5.01 11862.82 92.68 0.00 0.00 10778.17 2750.41 17792.10
00:14:31.769 [2024-11-06T14:18:59.404Z] ===================================================================================================================
00:14:31.769 [2024-11-06T14:18:59.404Z] Total : 11862.82 92.68 0.00 0.00 10778.17 2750.41 17792.10
00:14:31.769 [2024-11-06 14:18:59.192659] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:31.769 [2024-11-06
14:18:59.192697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:31.769 [2024-11-06 14:18:59.208624] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:31.769 [2024-11-06 14:18:59.208659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:31.769 [2024-11-06 14:18:59.224640] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:31.769 [2024-11-06 14:18:59.224690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:31.769 [2024-11-06 14:18:59.240713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:31.769 [2024-11-06 14:18:59.240780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:31.769 [2024-11-06 14:18:59.256587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:31.769 [2024-11-06 14:18:59.256630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:31.769 [2024-11-06 14:18:59.272620] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:31.769 [2024-11-06 14:18:59.272667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:31.769 [2024-11-06 14:18:59.288546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:31.769 [2024-11-06 14:18:59.288581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:31.769 [2024-11-06 14:18:59.304490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:31.769 [2024-11-06 14:18:59.304526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:31.769 [2024-11-06 14:18:59.320512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:31.769 [2024-11-06 14:18:59.320548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:31.769 [2024-11-06 14:18:59.336555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:31.769 [2024-11-06 14:18:59.336617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:31.769 [2024-11-06 14:18:59.352546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:31.769 [2024-11-06 14:18:59.352590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:31.769 [2024-11-06 14:18:59.368423] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:31.769 [2024-11-06 14:18:59.368460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:31.769 [2024-11-06 14:18:59.384392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:31.769 [2024-11-06 14:18:59.384427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:31.769 [2024-11-06 14:18:59.400390] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:31.769 [2024-11-06 14:18:59.400425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.028 [2024-11-06 14:18:59.416389] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.028 [2024-11-06 14:18:59.416422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.028 [2024-11-06 14:18:59.432352] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.028 [2024-11-06 14:18:59.432385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.028 [2024-11-06 14:18:59.448340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.028 [2024-11-06 14:18:59.448370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.028 [2024-11-06 14:18:59.464305] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.028 [2024-11-06 14:18:59.464348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.028 [2024-11-06 14:18:59.476305] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.028 [2024-11-06 14:18:59.476337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.028 [2024-11-06 14:18:59.488270] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.028 [2024-11-06 14:18:59.488301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.028 [2024-11-06 14:18:59.500258] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.028 [2024-11-06 14:18:59.500288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.028 [2024-11-06 14:18:59.512251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.028 [2024-11-06 14:18:59.512282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.028 [2024-11-06 14:18:59.524295] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.028 [2024-11-06 14:18:59.524326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.028 [2024-11-06 14:18:59.536293] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.028 [2024-11-06 14:18:59.536333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.028 [2024-11-06 14:18:59.548258] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.028 [2024-11-06 14:18:59.548291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.028 [2024-11-06 14:18:59.560203] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.028 [2024-11-06 14:18:59.560235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.028 [2024-11-06 14:18:59.572192] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.028 [2024-11-06 14:18:59.572225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.028 [2024-11-06 14:18:59.584163] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.028 [2024-11-06 14:18:59.584212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.028 [2024-11-06 14:18:59.596150] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.028 [2024-11-06 14:18:59.596184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.028 [2024-11-06 14:18:59.608236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.028 [2024-11-06 14:18:59.608292] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.028 [2024-11-06 14:18:59.620235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.028 [2024-11-06 14:18:59.620302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.028 [2024-11-06 14:18:59.632193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.028 [2024-11-06 14:18:59.632242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.028 [2024-11-06 14:18:59.644139] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.028 [2024-11-06 14:18:59.644172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.028 [2024-11-06 14:18:59.656087] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.028 [2024-11-06 14:18:59.656120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.287 [2024-11-06 14:18:59.668081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.287 [2024-11-06 14:18:59.668113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.287 [2024-11-06 14:18:59.680073] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.287 [2024-11-06 14:18:59.680107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.287 [2024-11-06 14:18:59.692030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.287 [2024-11-06 14:18:59.692063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.287 [2024-11-06 14:18:59.704048] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.287 [2024-11-06 14:18:59.704079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.287 [2024-11-06 14:18:59.716029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.287 [2024-11-06 14:18:59.716061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.287 [2024-11-06 14:18:59.727993] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.287 [2024-11-06 14:18:59.728025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.287 [2024-11-06 14:18:59.739997] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.287 [2024-11-06 14:18:59.740029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.287 [2024-11-06 14:18:59.751983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.287 [2024-11-06 14:18:59.752015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.287 [2024-11-06 14:18:59.764036] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.287 [2024-11-06 14:18:59.764068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.287 [2024-11-06 14:18:59.775991] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.287 [2024-11-06 14:18:59.776024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.287 [2024-11-06 14:18:59.791938] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.287 [2024-11-06 14:18:59.791970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.287 [2024-11-06 14:18:59.808047] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.287 [2024-11-06 14:18:59.808101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.287 [2024-11-06 14:18:59.820031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.287 [2024-11-06 14:18:59.820067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.287 [2024-11-06 14:18:59.831900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.287 [2024-11-06 14:18:59.831946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.287 [2024-11-06 14:18:59.843923] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.288 [2024-11-06 14:18:59.843954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.288 [2024-11-06 14:18:59.855871] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.288 [2024-11-06 14:18:59.855903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.288 [2024-11-06 14:18:59.867869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.288 [2024-11-06 14:18:59.867900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.288 [2024-11-06 14:18:59.879855] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.288 [2024-11-06 14:18:59.879900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.288 [2024-11-06 14:18:59.891814] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.288 [2024-11-06 14:18:59.891857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.288 [2024-11-06 14:18:59.903831] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.288 [2024-11-06 14:18:59.903878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.288 [2024-11-06 14:18:59.915828] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.288 [2024-11-06 14:18:59.915875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.546 [2024-11-06 14:18:59.927782] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.546 [2024-11-06 14:18:59.927814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.546 [2024-11-06 14:18:59.939769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.546 [2024-11-06 14:18:59.939801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.546 [2024-11-06 14:18:59.951731] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.546 [2024-11-06 14:18:59.951763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.546 [2024-11-06 14:18:59.963757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.546 [2024-11-06 14:18:59.963789] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.546 [2024-11-06 14:18:59.975718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.546 [2024-11-06 14:18:59.975750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.546 [2024-11-06 14:18:59.987681] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.546 [2024-11-06 14:18:59.987713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.546 [2024-11-06 14:18:59.999676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.546 [2024-11-06 14:18:59.999707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.546 [2024-11-06 14:19:00.011695] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.546 [2024-11-06 14:19:00.011733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.546 [2024-11-06 14:19:00.027687] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.546 [2024-11-06 14:19:00.027730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.546 [2024-11-06 14:19:00.043659] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.546 [2024-11-06 14:19:00.043694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.546 [2024-11-06 14:19:00.055611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.546 [2024-11-06 14:19:00.055643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.546 [2024-11-06 14:19:00.067639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.547 [2024-11-06 14:19:00.067672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.547 [2024-11-06 14:19:00.079599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.547 [2024-11-06 14:19:00.079631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.547 [2024-11-06 14:19:00.091561] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.547 [2024-11-06 14:19:00.091591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.547 [2024-11-06 14:19:00.103563] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.547 [2024-11-06 14:19:00.103596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.547 [2024-11-06 14:19:00.115586] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.547 [2024-11-06 14:19:00.115621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.547 [2024-11-06 14:19:00.127567] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.547 [2024-11-06 14:19:00.127602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.547 [2024-11-06 14:19:00.139606] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.547 [2024-11-06 14:19:00.139638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.547 [2024-11-06 14:19:00.151508] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.547 [2024-11-06 14:19:00.151540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.547 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (68794) - No such process 00:14:32.547 14:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 68794 00:14:32.547 14:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:32.547 14:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.547 14:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:32.547 14:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.547 14:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:32.547 14:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.547 14:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:32.547 delay0 00:14:32.547 14:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.547 14:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:14:32.805 14:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.805 14:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:32.805 14:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.805 14:19:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:14:33.064 [2024-11-06 14:19:00.443782] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:14:39.626 Initializing NVMe Controllers 00:14:39.626 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:39.626 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:39.626 Initialization complete. Launching workers. 
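Note on the long run of "Requested NSID 1 already in use" / "Unable to add namespace" messages above: they appear to come from the zcopy test repeatedly attempting to add a namespace with NSID 1 while that NSID is still attached, so every attempt is rejected on the NSID check. As a rough illustration only, the following sketch reproduces the same error path with the stock scripts/rpc.py client against a running nvmf_tgt; the malloc bdev names are illustrative and not the ones this test uses.

  rpc.py nvmf_create_transport -t tcp -o -u 8192                                   # TCP transport, as in this run
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py bdev_malloc_create 64 512 -b malloc0                                      # 64 MiB bdev, 512 B blocks
  rpc.py bdev_malloc_create 64 512 -b malloc1
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1             # claims NSID 1
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc1 -n 1             # rejected: Requested NSID 1 already in use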
00:14:39.626 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 87 00:14:39.626 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 374, failed to submit 33 00:14:39.626 success 250, unsuccessful 124, failed 0 00:14:39.626 14:19:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:14:39.626 14:19:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:14:39.626 14:19:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:39.626 14:19:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:14:39.626 14:19:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:39.626 14:19:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:14:39.626 14:19:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:39.626 14:19:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:39.626 rmmod nvme_tcp 00:14:39.626 rmmod nvme_fabrics 00:14:39.626 rmmod nvme_keyring 00:14:39.626 14:19:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:39.626 14:19:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:14:39.626 14:19:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:14:39.626 14:19:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 68626 ']' 00:14:39.626 14:19:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 68626 00:14:39.626 14:19:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 68626 ']' 00:14:39.626 14:19:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 68626 00:14:39.626 14:19:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:14:39.626 14:19:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:39.626 14:19:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 68626 00:14:39.626 14:19:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:14:39.626 14:19:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:14:39.626 killing process with pid 68626 00:14:39.626 14:19:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 68626' 00:14:39.626 14:19:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 68626 00:14:39.626 14:19:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 68626 00:14:40.561 14:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:40.562 14:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:40.562 14:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:40.562 14:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:14:40.562 14:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:14:40.562 14:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:14:40.562 14:19:07 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:40.562 14:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:40.562 14:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:40.562 14:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:40.562 14:19:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:40.562 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:40.562 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:40.562 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:40.562 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:40.562 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:40.562 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:40.562 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:40.562 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:40.562 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:40.562 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:40.821 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:40.821 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:40.821 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:40.821 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:40.821 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:40.821 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:14:40.821 00:14:40.821 real 0m29.263s 00:14:40.821 user 0m46.384s 00:14:40.821 sys 0m9.059s 00:14:40.821 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:40.821 ************************************ 00:14:40.821 END TEST nvmf_zcopy 00:14:40.821 ************************************ 00:14:40.821 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:40.821 14:19:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:14:40.821 14:19:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:40.821 14:19:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:40.821 14:19:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:40.821 ************************************ 00:14:40.821 START TEST nvmf_nmic 00:14:40.821 ************************************ 00:14:40.821 14:19:08 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:14:41.081 * Looking for test storage... 00:14:41.081 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:41.081 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:41.081 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:14:41.081 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:41.081 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:41.081 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:41.081 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:41.081 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:41.081 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:14:41.081 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:14:41.081 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:14:41.081 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:14:41.081 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:14:41.081 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:14:41.081 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:14:41.081 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:41.081 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:14:41.081 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:14:41.081 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:41.081 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:41.081 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:14:41.081 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:14:41.081 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:41.081 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:14:41.081 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:14:41.081 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:14:41.081 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:14:41.081 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:41.081 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:14:41.081 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:14:41.081 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:41.081 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:41.081 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:14:41.081 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:41.081 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:41.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.081 --rc genhtml_branch_coverage=1 00:14:41.081 --rc genhtml_function_coverage=1 00:14:41.081 --rc genhtml_legend=1 00:14:41.081 --rc geninfo_all_blocks=1 00:14:41.081 --rc geninfo_unexecuted_blocks=1 00:14:41.081 00:14:41.081 ' 00:14:41.081 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:41.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.081 --rc genhtml_branch_coverage=1 00:14:41.082 --rc genhtml_function_coverage=1 00:14:41.082 --rc genhtml_legend=1 00:14:41.082 --rc geninfo_all_blocks=1 00:14:41.082 --rc geninfo_unexecuted_blocks=1 00:14:41.082 00:14:41.082 ' 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:41.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.082 --rc genhtml_branch_coverage=1 00:14:41.082 --rc genhtml_function_coverage=1 00:14:41.082 --rc genhtml_legend=1 00:14:41.082 --rc geninfo_all_blocks=1 00:14:41.082 --rc geninfo_unexecuted_blocks=1 00:14:41.082 00:14:41.082 ' 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:41.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.082 --rc genhtml_branch_coverage=1 00:14:41.082 --rc genhtml_function_coverage=1 00:14:41.082 --rc genhtml_legend=1 00:14:41.082 --rc geninfo_all_blocks=1 00:14:41.082 --rc geninfo_unexecuted_blocks=1 00:14:41.082 00:14:41.082 ' 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:41.082 14:19:08 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:41.082 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:14:41.082 14:19:08 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:41.082 Cannot 
find device "nvmf_init_br" 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:41.082 Cannot find device "nvmf_init_br2" 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:14:41.082 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:41.342 Cannot find device "nvmf_tgt_br" 00:14:41.342 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:14:41.342 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:41.342 Cannot find device "nvmf_tgt_br2" 00:14:41.342 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:14:41.342 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:41.342 Cannot find device "nvmf_init_br" 00:14:41.342 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:14:41.342 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:41.342 Cannot find device "nvmf_init_br2" 00:14:41.342 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:14:41.342 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:41.342 Cannot find device "nvmf_tgt_br" 00:14:41.342 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:14:41.342 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:41.342 Cannot find device "nvmf_tgt_br2" 00:14:41.342 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:14:41.342 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:41.342 Cannot find device "nvmf_br" 00:14:41.342 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:14:41.342 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:41.342 Cannot find device "nvmf_init_if" 00:14:41.342 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:14:41.342 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:41.342 Cannot find device "nvmf_init_if2" 00:14:41.342 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:14:41.342 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:41.342 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:41.342 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:14:41.342 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:41.342 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:41.342 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:14:41.342 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:41.342 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
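For context, the "Cannot find device" and "Cannot open network namespace" messages above are only the cleanup pass removing links that do not exist yet; nvmf_veth_init then builds a small virtual topology for the target. A condensed sketch of what these and the following commands create (interface names and addresses are taken from this log; the second initiator/target pair on 10.0.0.2/10.0.0.4, the link-up commands, and the iptables ACCEPT rules further down are omitted for brevity):

  ip netns add nvmf_tgt_ns_spdk                                   # target runs in its own network namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br         # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                  # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if                        # host/initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target listener address
  ip link add nvmf_br type bridge                                 # bridge joins the *_br peer ends
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br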
00:14:41.342 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:41.342 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:41.342 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:41.342 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:41.342 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:41.342 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:41.342 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:41.342 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:41.620 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:41.620 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:41.620 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:41.620 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:41.620 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:41.620 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:41.620 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:41.620 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:41.621 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:41.621 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:41.621 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:41.621 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:41.621 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:41.621 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:41.621 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:41.621 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:41.621 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:41.621 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:41.621 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:41.621 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:41.621 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:41.621 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:41.621 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:41.621 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:41.621 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.104 ms 00:14:41.621 00:14:41.621 --- 10.0.0.3 ping statistics --- 00:14:41.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.621 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:14:41.621 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:41.621 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:41.621 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.141 ms 00:14:41.621 00:14:41.621 --- 10.0.0.4 ping statistics --- 00:14:41.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.621 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:14:41.621 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:41.621 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:41.621 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:14:41.621 00:14:41.621 --- 10.0.0.1 ping statistics --- 00:14:41.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.621 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:14:41.621 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:41.621 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:41.621 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.746 ms 00:14:41.621 00:14:41.621 --- 10.0.0.2 ping statistics --- 00:14:41.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.621 rtt min/avg/max/mdev = 0.746/0.746/0.746/0.000 ms 00:14:41.621 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:41.621 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:14:41.621 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:41.621 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:41.621 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:41.621 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:41.621 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:41.621 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:41.621 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:41.621 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:14:41.621 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:41.621 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:41.621 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:41.621 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=69198 00:14:41.621 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:41.621 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 69198 00:14:41.621 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 69198 ']' 00:14:41.621 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:41.621 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:41.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:41.621 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:41.621 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:41.621 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:41.880 [2024-11-06 14:19:09.337856] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
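The target application itself is launched inside that namespace, as the NVMF_APP line above shows. A sketch of the equivalent manual invocation, with the flags as they appear in this log (-i shared-memory id, -e tracepoint group mask, -m reactor core mask):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # the harness then waits for the /var/tmp/spdk.sock JSON-RPC socket before issuing rpc.py calls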
00:14:41.880 [2024-11-06 14:19:09.338525] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:42.138 [2024-11-06 14:19:09.529234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:42.138 [2024-11-06 14:19:09.702201] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:42.138 [2024-11-06 14:19:09.702298] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:42.138 [2024-11-06 14:19:09.702337] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:42.138 [2024-11-06 14:19:09.702360] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:42.138 [2024-11-06 14:19:09.702380] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:42.138 [2024-11-06 14:19:09.704759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:42.138 [2024-11-06 14:19:09.704901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:42.138 [2024-11-06 14:19:09.705050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.138 [2024-11-06 14:19:09.705080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:42.396 [2024-11-06 14:19:09.938468] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:42.654 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:42.654 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:14:42.654 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:42.654 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:42.654 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:42.654 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:42.654 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:42.654 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.654 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:42.654 [2024-11-06 14:19:10.207156] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:42.654 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.654 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:42.654 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.655 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:42.912 Malloc0 00:14:42.912 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.912 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:42.912 14:19:10 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.912 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:42.912 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.912 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:42.912 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.912 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:42.912 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.912 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:42.912 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.912 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:42.912 [2024-11-06 14:19:10.346702] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:42.912 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.912 test case1: single bdev can't be used in multiple subsystems 00:14:42.912 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:14:42.912 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:14:42.912 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.912 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:42.913 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.913 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:14:42.913 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.913 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:42.913 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.913 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:14:42.913 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:14:42.913 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.913 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:42.913 [2024-11-06 14:19:10.382337] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:14:42.913 [2024-11-06 14:19:10.382400] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:14:42.913 [2024-11-06 14:19:10.382420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.913 request: 00:14:42.913 { 00:14:42.913 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:14:42.913 "namespace": { 00:14:42.913 "bdev_name": "Malloc0", 00:14:42.913 "no_auto_visible": false 00:14:42.913 }, 00:14:42.913 "method": "nvmf_subsystem_add_ns", 00:14:42.913 "req_id": 1 00:14:42.913 } 00:14:42.913 Got JSON-RPC error response 00:14:42.913 response: 00:14:42.913 { 00:14:42.913 "code": -32602, 00:14:42.913 "message": "Invalid parameters" 00:14:42.913 } 00:14:42.913 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:42.913 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:14:42.913 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:14:42.913 Adding namespace failed - expected result. 00:14:42.913 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:14:42.913 test case2: host connect to nvmf target in multiple paths 00:14:42.913 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:14:42.913 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:14:42.913 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.913 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:42.913 [2024-11-06 14:19:10.398543] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:14:42.913 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.913 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid=406d54d0-5e94-472a-a2b3-4291f3ac81e0 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:14:42.913 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid=406d54d0-5e94-472a-a2b3-4291f3ac81e0 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:14:43.171 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:14:43.171 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:14:43.171 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:43.171 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:14:43.171 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:14:45.073 14:19:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:45.073 14:19:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:45.073 14:19:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:45.331 14:19:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:14:45.331 14:19:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:45.331 14:19:12 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:14:45.331 14:19:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:14:45.331 [global] 00:14:45.331 thread=1 00:14:45.331 invalidate=1 00:14:45.331 rw=write 00:14:45.331 time_based=1 00:14:45.331 runtime=1 00:14:45.332 ioengine=libaio 00:14:45.332 direct=1 00:14:45.332 bs=4096 00:14:45.332 iodepth=1 00:14:45.332 norandommap=0 00:14:45.332 numjobs=1 00:14:45.332 00:14:45.332 verify_dump=1 00:14:45.332 verify_backlog=512 00:14:45.332 verify_state_save=0 00:14:45.332 do_verify=1 00:14:45.332 verify=crc32c-intel 00:14:45.332 [job0] 00:14:45.332 filename=/dev/nvme0n1 00:14:45.332 Could not set queue depth (nvme0n1) 00:14:45.332 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:45.332 fio-3.35 00:14:45.332 Starting 1 thread 00:14:46.717 00:14:46.717 job0: (groupid=0, jobs=1): err= 0: pid=69290: Wed Nov 6 14:19:14 2024 00:14:46.717 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:14:46.717 slat (nsec): min=9169, max=41750, avg=11087.20, stdev=2632.87 00:14:46.717 clat (usec): min=135, max=388, avg=177.84, stdev=17.98 00:14:46.717 lat (usec): min=146, max=398, avg=188.93, stdev=18.13 00:14:46.717 clat percentiles (usec): 00:14:46.717 | 1.00th=[ 145], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 165], 00:14:46.717 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 176], 60.00th=[ 180], 00:14:46.717 | 70.00th=[ 186], 80.00th=[ 190], 90.00th=[ 200], 95.00th=[ 208], 00:14:46.717 | 99.00th=[ 227], 99.50th=[ 241], 99.90th=[ 302], 99.95th=[ 359], 00:14:46.717 | 99.99th=[ 388] 00:14:46.717 write: IOPS=3189, BW=12.5MiB/s (13.1MB/s)(12.5MiB/1001msec); 0 zone resets 00:14:46.717 slat (usec): min=13, max=165, avg=17.98, stdev= 8.04 00:14:46.717 clat (usec): min=83, max=403, avg=111.53, stdev=17.87 00:14:46.717 lat (usec): min=98, max=467, avg=129.52, stdev=21.19 00:14:46.717 clat percentiles (usec): 00:14:46.717 | 1.00th=[ 89], 5.00th=[ 93], 10.00th=[ 95], 20.00th=[ 99], 00:14:46.717 | 30.00th=[ 103], 40.00th=[ 106], 50.00th=[ 110], 60.00th=[ 113], 00:14:46.717 | 70.00th=[ 117], 80.00th=[ 122], 90.00th=[ 131], 95.00th=[ 139], 00:14:46.718 | 99.00th=[ 159], 99.50th=[ 204], 99.90th=[ 260], 99.95th=[ 400], 00:14:46.718 | 99.99th=[ 404] 00:14:46.718 bw ( KiB/s): min=12600, max=12600, per=98.75%, avg=12600.00, stdev= 0.00, samples=1 00:14:46.718 iops : min= 3150, max= 3150, avg=3150.00, stdev= 0.00, samples=1 00:14:46.718 lat (usec) : 100=11.59%, 250=88.11%, 500=0.30% 00:14:46.718 cpu : usr=1.80%, sys=6.60%, ctx=6265, majf=0, minf=5 00:14:46.718 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:46.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:46.718 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:46.718 issued rwts: total=3072,3193,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:46.718 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:46.718 00:14:46.718 Run status group 0 (all jobs): 00:14:46.718 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:14:46.718 WRITE: bw=12.5MiB/s (13.1MB/s), 12.5MiB/s-12.5MiB/s (13.1MB/s-13.1MB/s), io=12.5MiB (13.1MB), run=1001-1001msec 00:14:46.718 00:14:46.718 Disk stats (read/write): 00:14:46.718 nvme0n1: ios=2666/3072, merge=0/0, ticks=509/378, in_queue=887, 
util=91.38% 00:14:46.718 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:46.718 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:14:46.718 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:46.718 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:14:46.718 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:14:46.718 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:46.718 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:14:46.718 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:46.718 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:14:46.718 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:14:46.718 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:14:46.718 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:46.718 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:14:46.718 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:46.718 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:14:46.718 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:46.718 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:46.718 rmmod nvme_tcp 00:14:46.718 rmmod nvme_fabrics 00:14:46.718 rmmod nvme_keyring 00:14:46.718 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:46.718 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:14:46.718 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:14:46.718 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 69198 ']' 00:14:46.718 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 69198 00:14:46.718 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 69198 ']' 00:14:46.718 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 69198 00:14:46.718 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:14:46.718 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:46.718 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69198 00:14:46.718 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:46.718 killing process with pid 69198 00:14:46.718 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:46.718 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69198' 00:14:46.718 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 69198 
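(Condensed, the teardown traced above boils down to the following host-side commands — a sketch using the NQN and pid that appear in this log, not the literal autotest helpers:)

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1      # drops both paths (4420 and 4421), hence "disconnected 2 controller(s)"
    modprobe -r nvme-tcp nvme-fabrics nvme-keyring     # what the rmmod lines above accomplish
    kill 69198                                         # stop the nvmf_tgt app started for this test
    wait 69198 2>/dev/null || true                     # wait only applies to children of the calling shell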
00:14:46.718 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 69198 00:14:48.620 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:48.620 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:48.620 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:48.620 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:14:48.620 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:14:48.620 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:14:48.620 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:48.620 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:48.620 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:48.620 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:48.621 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:48.621 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:48.621 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:48.621 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:48.621 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:48.621 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:48.621 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:48.621 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:48.621 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:48.621 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:48.621 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:48.621 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:48.621 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:48.621 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:48.621 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:48.621 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:48.621 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:14:48.621 00:14:48.621 real 0m7.690s 00:14:48.621 user 0m21.676s 00:14:48.621 sys 0m3.195s 00:14:48.621 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:48.621 ************************************ 00:14:48.621 END TEST nvmf_nmic 00:14:48.621 ************************************ 00:14:48.621 14:19:16 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:48.621 14:19:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:14:48.621 14:19:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:48.621 14:19:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:48.621 14:19:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:48.621 ************************************ 00:14:48.621 START TEST nvmf_fio_target 00:14:48.621 ************************************ 00:14:48.621 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:14:48.880 * Looking for test storage... 00:14:48.880 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:48.880 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:48.880 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:14:48.880 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:48.880 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:48.880 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:48.880 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:48.880 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:48.880 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:14:48.880 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:14:48.880 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:14:48.880 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:14:48.880 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:14:48.880 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:14:48.880 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:14:48.880 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:48.880 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:14:48.880 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:14:48.880 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:48.880 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:48.880 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:14:48.880 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:14:48.880 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:48.880 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:14:48.880 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:14:48.880 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:14:48.880 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:14:48.880 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:48.880 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:14:48.880 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:14:48.880 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:48.880 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:48.880 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:14:48.880 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:48.880 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:48.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.880 --rc genhtml_branch_coverage=1 00:14:48.880 --rc genhtml_function_coverage=1 00:14:48.881 --rc genhtml_legend=1 00:14:48.881 --rc geninfo_all_blocks=1 00:14:48.881 --rc geninfo_unexecuted_blocks=1 00:14:48.881 00:14:48.881 ' 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:48.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.881 --rc genhtml_branch_coverage=1 00:14:48.881 --rc genhtml_function_coverage=1 00:14:48.881 --rc genhtml_legend=1 00:14:48.881 --rc geninfo_all_blocks=1 00:14:48.881 --rc geninfo_unexecuted_blocks=1 00:14:48.881 00:14:48.881 ' 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:48.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.881 --rc genhtml_branch_coverage=1 00:14:48.881 --rc genhtml_function_coverage=1 00:14:48.881 --rc genhtml_legend=1 00:14:48.881 --rc geninfo_all_blocks=1 00:14:48.881 --rc geninfo_unexecuted_blocks=1 00:14:48.881 00:14:48.881 ' 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:48.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.881 --rc genhtml_branch_coverage=1 00:14:48.881 --rc genhtml_function_coverage=1 00:14:48.881 --rc genhtml_legend=1 00:14:48.881 --rc geninfo_all_blocks=1 00:14:48.881 --rc geninfo_unexecuted_blocks=1 00:14:48.881 00:14:48.881 ' 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:14:48.881 
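(The lt/cmp_versions trace above is the stock version gate: the branch/function-coverage lcov flags are only enabled when the installed lcov is older than 2.x. A minimal sketch of the same check, assuming plain bash with sort -V instead of the script's field-by-field loop:)

    lcov_ver=$(lcov --version | awk '{print $NF}')     # e.g. "1.15", as probed above
    # "is $lcov_ver < 2" -- true when sort -V orders it first and the two differ
    if [ "$(printf '%s\n' "$lcov_ver" 2 | sort -V | head -n1)" = "$lcov_ver" ] && [ "$lcov_ver" != 2 ]; then
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi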
14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:48.881 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:48.881 14:19:16 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:48.881 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:48.882 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:48.882 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:48.882 Cannot find device "nvmf_init_br" 00:14:48.882 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:14:48.882 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:48.882 Cannot find device "nvmf_init_br2" 00:14:48.882 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:14:48.882 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:48.882 Cannot find device "nvmf_tgt_br" 00:14:48.882 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:14:48.882 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:49.140 Cannot find device "nvmf_tgt_br2" 00:14:49.140 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:14:49.140 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:49.140 Cannot find device "nvmf_init_br" 00:14:49.140 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:14:49.140 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:49.140 Cannot find device "nvmf_init_br2" 00:14:49.140 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:14:49.140 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:49.140 Cannot find device "nvmf_tgt_br" 00:14:49.140 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:14:49.140 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:49.140 Cannot find device "nvmf_tgt_br2" 00:14:49.140 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:14:49.140 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:49.140 Cannot find device "nvmf_br" 00:14:49.140 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:14:49.140 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:49.140 Cannot find device "nvmf_init_if" 00:14:49.140 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:14:49.140 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:49.140 Cannot find device "nvmf_init_if2" 00:14:49.140 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:14:49.140 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:49.140 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:49.140 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:14:49.140 
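(All of the "Cannot find device"/true pairs above just make the setup idempotent: nothing from a previous run is left to delete. The topology the following trace then builds is, in outline — condensed here to one initiator/target veth pair, where the script creates two of each:)

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side stays on the host
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side moves into the netns
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                      # both peer ends hang off the bridge
    ip link set nvmf_tgt_br master nvmf_br

(The trace then adds iptables ACCEPT rules for port 4420 and pings each address in both directions before starting the target inside the namespace.)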
14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:49.140 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:49.140 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:14:49.140 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:49.140 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:49.140 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:49.140 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:49.140 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:49.140 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:49.140 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:49.400 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:49.400 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:49.400 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:49.400 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:49.400 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:49.400 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:49.400 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:49.400 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:49.400 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:49.400 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:49.400 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:49.400 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:49.400 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:49.400 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:49.400 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:49.400 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:49.400 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:14:49.401 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:49.401 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:49.401 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:49.401 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:49.401 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:49.401 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:49.401 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:49.401 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:49.401 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:49.401 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:49.401 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.118 ms 00:14:49.401 00:14:49.401 --- 10.0.0.3 ping statistics --- 00:14:49.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.401 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:14:49.401 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:49.401 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:49.401 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.115 ms 00:14:49.401 00:14:49.401 --- 10.0.0.4 ping statistics --- 00:14:49.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.401 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:14:49.401 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:49.401 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:49.401 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:14:49.401 00:14:49.401 --- 10.0.0.1 ping statistics --- 00:14:49.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.401 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:14:49.401 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:49.401 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:49.401 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:14:49.401 00:14:49.401 --- 10.0.0.2 ping statistics --- 00:14:49.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.401 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:14:49.401 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:49.401 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:14:49.401 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:49.401 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:49.401 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:49.401 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:49.401 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:49.401 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:49.401 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:49.401 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:14:49.401 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:49.401 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:49.401 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.660 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=69542 00:14:49.660 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 69542 00:14:49.660 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:49.660 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 69542 ']' 00:14:49.660 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:49.660 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:49.660 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:49.660 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:49.660 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.660 [2024-11-06 14:19:17.160668] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:14:49.660 [2024-11-06 14:19:17.160811] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:49.918 [2024-11-06 14:19:17.348335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:49.918 [2024-11-06 14:19:17.483888] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:49.918 [2024-11-06 14:19:17.483986] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:49.918 [2024-11-06 14:19:17.484004] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:49.918 [2024-11-06 14:19:17.484017] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:49.918 [2024-11-06 14:19:17.484031] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:49.918 [2024-11-06 14:19:17.486215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:49.918 [2024-11-06 14:19:17.486358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:49.918 [2024-11-06 14:19:17.486465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.918 [2024-11-06 14:19:17.486514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:50.176 [2024-11-06 14:19:17.720345] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:50.435 14:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:50.435 14:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:14:50.435 14:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:50.435 14:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:50.435 14:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.693 14:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:50.693 14:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:50.693 [2024-11-06 14:19:18.280664] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:50.952 14:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:51.211 14:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:14:51.211 14:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:51.469 14:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:14:51.469 14:19:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:51.727 14:19:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:14:51.727 14:19:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:51.986 14:19:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:14:51.986 14:19:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:14:52.245 14:19:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:52.810 14:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:14:52.810 14:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:53.069 14:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:14:53.069 14:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:53.329 14:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:14:53.329 14:19:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:14:53.590 14:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:53.876 14:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:14:53.876 14:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:54.135 14:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:14:54.135 14:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:54.135 14:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:54.395 [2024-11-06 14:19:21.924902] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:54.395 14:19:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:14:54.653 14:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:14:54.911 14:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid=406d54d0-5e94-472a-a2b3-4291f3ac81e0 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:14:55.171 14:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:14:55.171 14:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:14:55.171 14:19:22 
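(Put together, the provisioning sequence traced above amounts to the following rpc.py calls — a condensed restatement of what the log already shows, with the repeated bdev_malloc_create lines collapsed into loops:)

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    for i in $(seq 0 6); do rpc.py bdev_malloc_create 64 512; done          # auto-named Malloc0..Malloc6
    rpc.py bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
    rpc.py bdev_raid_create -n concat0 -z 64 -r concat -b 'Malloc4 Malloc5 Malloc6'
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    for bdev in Malloc0 Malloc1 raid0 concat0; do
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"     # four namespaces for cnode1
    done
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

(After the nvme connect that follows, the host sees those four namespaces as /dev/nvme0n1 through /dev/nvme0n4, which is exactly what the fio job files below target.)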
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:55.171 14:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:14:55.171 14:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:14:55.171 14:19:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:14:57.075 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:57.075 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:57.075 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:57.075 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:14:57.075 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:57.075 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:14:57.075 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:14:57.075 [global] 00:14:57.075 thread=1 00:14:57.075 invalidate=1 00:14:57.075 rw=write 00:14:57.075 time_based=1 00:14:57.076 runtime=1 00:14:57.076 ioengine=libaio 00:14:57.076 direct=1 00:14:57.076 bs=4096 00:14:57.076 iodepth=1 00:14:57.076 norandommap=0 00:14:57.076 numjobs=1 00:14:57.076 00:14:57.076 verify_dump=1 00:14:57.076 verify_backlog=512 00:14:57.076 verify_state_save=0 00:14:57.076 do_verify=1 00:14:57.076 verify=crc32c-intel 00:14:57.076 [job0] 00:14:57.076 filename=/dev/nvme0n1 00:14:57.076 [job1] 00:14:57.076 filename=/dev/nvme0n2 00:14:57.076 [job2] 00:14:57.076 filename=/dev/nvme0n3 00:14:57.076 [job3] 00:14:57.076 filename=/dev/nvme0n4 00:14:57.334 Could not set queue depth (nvme0n1) 00:14:57.334 Could not set queue depth (nvme0n2) 00:14:57.334 Could not set queue depth (nvme0n3) 00:14:57.334 Could not set queue depth (nvme0n4) 00:14:57.334 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:57.334 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:57.334 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:57.334 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:57.334 fio-3.35 00:14:57.334 Starting 4 threads 00:14:58.709 00:14:58.709 job0: (groupid=0, jobs=1): err= 0: pid=69726: Wed Nov 6 14:19:26 2024 00:14:58.709 read: IOPS=1839, BW=7357KiB/s (7533kB/s)(7364KiB/1001msec) 00:14:58.709 slat (usec): min=7, max=540, avg=14.20, stdev=14.47 00:14:58.709 clat (usec): min=162, max=2316, avg=294.87, stdev=88.55 00:14:58.709 lat (usec): min=172, max=2342, avg=309.07, stdev=92.53 00:14:58.709 clat percentiles (usec): 00:14:58.709 | 1.00th=[ 182], 5.00th=[ 200], 10.00th=[ 215], 20.00th=[ 253], 00:14:58.709 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 289], 60.00th=[ 297], 00:14:58.709 | 70.00th=[ 306], 80.00th=[ 318], 90.00th=[ 343], 95.00th=[ 453], 00:14:58.709 | 99.00th=[ 562], 99.50th=[ 578], 99.90th=[ 1369], 99.95th=[ 2311], 00:14:58.709 | 99.99th=[ 2311] 
00:14:58.709 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:14:58.709 slat (usec): min=11, max=141, avg=22.33, stdev=11.44 00:14:58.709 clat (usec): min=97, max=330, avg=185.15, stdev=47.63 00:14:58.709 lat (usec): min=114, max=433, avg=207.48, stdev=53.41 00:14:58.709 clat percentiles (usec): 00:14:58.709 | 1.00th=[ 111], 5.00th=[ 120], 10.00th=[ 126], 20.00th=[ 135], 00:14:58.709 | 30.00th=[ 145], 40.00th=[ 159], 50.00th=[ 192], 60.00th=[ 206], 00:14:58.709 | 70.00th=[ 219], 80.00th=[ 231], 90.00th=[ 247], 95.00th=[ 260], 00:14:58.709 | 99.00th=[ 289], 99.50th=[ 293], 99.90th=[ 322], 99.95th=[ 326], 00:14:58.709 | 99.99th=[ 330] 00:14:58.709 bw ( KiB/s): min= 8192, max= 8192, per=29.64%, avg=8192.00, stdev= 0.00, samples=1 00:14:58.709 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:14:58.709 lat (usec) : 100=0.03%, 250=57.26%, 500=40.60%, 750=2.03%, 1000=0.03% 00:14:58.709 lat (msec) : 2=0.03%, 4=0.03% 00:14:58.709 cpu : usr=1.20%, sys=6.20%, ctx=3897, majf=0, minf=11 00:14:58.709 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:58.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:58.709 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:58.709 issued rwts: total=1841,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:58.709 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:58.709 job1: (groupid=0, jobs=1): err= 0: pid=69727: Wed Nov 6 14:19:26 2024 00:14:58.709 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:14:58.709 slat (usec): min=9, max=123, avg=19.52, stdev= 8.12 00:14:58.709 clat (usec): min=233, max=3208, avg=325.36, stdev=92.11 00:14:58.709 lat (usec): min=264, max=3254, avg=344.88, stdev=93.75 00:14:58.709 clat percentiles (usec): 00:14:58.709 | 1.00th=[ 260], 5.00th=[ 273], 10.00th=[ 281], 20.00th=[ 289], 00:14:58.709 | 30.00th=[ 297], 40.00th=[ 306], 50.00th=[ 318], 60.00th=[ 322], 00:14:58.709 | 70.00th=[ 334], 80.00th=[ 347], 90.00th=[ 367], 95.00th=[ 400], 00:14:58.709 | 99.00th=[ 537], 99.50th=[ 578], 99.90th=[ 1369], 99.95th=[ 3195], 00:14:58.709 | 99.99th=[ 3195] 00:14:58.709 write: IOPS=1607, BW=6430KiB/s (6584kB/s)(6436KiB/1001msec); 0 zone resets 00:14:58.709 slat (usec): min=15, max=187, avg=39.03, stdev=10.42 00:14:58.709 clat (usec): min=123, max=515, avg=248.50, stdev=34.24 00:14:58.709 lat (usec): min=157, max=566, avg=287.52, stdev=37.87 00:14:58.709 clat percentiles (usec): 00:14:58.709 | 1.00th=[ 194], 5.00th=[ 208], 10.00th=[ 215], 20.00th=[ 225], 00:14:58.709 | 30.00th=[ 231], 40.00th=[ 237], 50.00th=[ 243], 60.00th=[ 251], 00:14:58.709 | 70.00th=[ 260], 80.00th=[ 269], 90.00th=[ 285], 95.00th=[ 297], 00:14:58.709 | 99.00th=[ 375], 99.50th=[ 392], 99.90th=[ 461], 99.95th=[ 515], 00:14:58.709 | 99.99th=[ 515] 00:14:58.709 bw ( KiB/s): min= 8192, max= 8192, per=29.64%, avg=8192.00, stdev= 0.00, samples=1 00:14:58.709 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:14:58.709 lat (usec) : 250=30.33%, 500=68.93%, 750=0.64%, 1000=0.03% 00:14:58.709 lat (msec) : 2=0.03%, 4=0.03% 00:14:58.709 cpu : usr=1.80%, sys=7.70%, ctx=3147, majf=0, minf=15 00:14:58.709 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:58.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:58.709 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:58.709 issued rwts: total=1536,1609,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:14:58.709 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:58.709 job2: (groupid=0, jobs=1): err= 0: pid=69728: Wed Nov 6 14:19:26 2024 00:14:58.709 read: IOPS=1325, BW=5303KiB/s (5430kB/s)(5308KiB/1001msec) 00:14:58.709 slat (usec): min=8, max=519, avg=27.75, stdev=19.95 00:14:58.709 clat (usec): min=195, max=3099, avg=399.78, stdev=120.86 00:14:58.709 lat (usec): min=204, max=3111, avg=427.53, stdev=124.48 00:14:58.709 clat percentiles (usec): 00:14:58.709 | 1.00th=[ 241], 5.00th=[ 281], 10.00th=[ 293], 20.00th=[ 310], 00:14:58.709 | 30.00th=[ 326], 40.00th=[ 351], 50.00th=[ 396], 60.00th=[ 429], 00:14:58.709 | 70.00th=[ 457], 80.00th=[ 490], 90.00th=[ 515], 95.00th=[ 537], 00:14:58.709 | 99.00th=[ 570], 99.50th=[ 594], 99.90th=[ 1778], 99.95th=[ 3097], 00:14:58.709 | 99.99th=[ 3097] 00:14:58.709 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:14:58.709 slat (usec): min=12, max=186, avg=34.73, stdev=14.34 00:14:58.709 clat (usec): min=127, max=456, avg=241.74, stdev=51.93 00:14:58.709 lat (usec): min=140, max=584, avg=276.48, stdev=60.40 00:14:58.709 clat percentiles (usec): 00:14:58.709 | 1.00th=[ 135], 5.00th=[ 151], 10.00th=[ 169], 20.00th=[ 206], 00:14:58.709 | 30.00th=[ 221], 40.00th=[ 231], 50.00th=[ 239], 60.00th=[ 249], 00:14:58.709 | 70.00th=[ 260], 80.00th=[ 277], 90.00th=[ 310], 95.00th=[ 347], 00:14:58.709 | 99.00th=[ 375], 99.50th=[ 388], 99.90th=[ 400], 99.95th=[ 457], 00:14:58.709 | 99.99th=[ 457] 00:14:58.709 bw ( KiB/s): min= 7096, max= 7096, per=25.67%, avg=7096.00, stdev= 0.00, samples=1 00:14:58.709 iops : min= 1774, max= 1774, avg=1774.00, stdev= 0.00, samples=1 00:14:58.709 lat (usec) : 250=33.39%, 500=59.34%, 750=7.20% 00:14:58.709 lat (msec) : 2=0.03%, 4=0.03% 00:14:58.709 cpu : usr=1.80%, sys=7.50%, ctx=2864, majf=0, minf=8 00:14:58.709 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:58.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:58.709 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:58.709 issued rwts: total=1327,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:58.709 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:58.709 job3: (groupid=0, jobs=1): err= 0: pid=69730: Wed Nov 6 14:19:26 2024 00:14:58.709 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:14:58.709 slat (nsec): min=13128, max=71912, avg=22168.01, stdev=6976.63 00:14:58.709 clat (usec): min=199, max=679, avg=314.85, stdev=40.38 00:14:58.709 lat (usec): min=218, max=697, avg=337.02, stdev=42.54 00:14:58.709 clat percentiles (usec): 00:14:58.709 | 1.00th=[ 241], 5.00th=[ 265], 10.00th=[ 273], 20.00th=[ 285], 00:14:58.709 | 30.00th=[ 293], 40.00th=[ 302], 50.00th=[ 310], 60.00th=[ 318], 00:14:58.709 | 70.00th=[ 330], 80.00th=[ 343], 90.00th=[ 359], 95.00th=[ 375], 00:14:58.709 | 99.00th=[ 449], 99.50th=[ 515], 99.90th=[ 619], 99.95th=[ 676], 00:14:58.709 | 99.99th=[ 676] 00:14:58.709 write: IOPS=1722, BW=6889KiB/s (7054kB/s)(6896KiB/1001msec); 0 zone resets 00:14:58.709 slat (usec): min=19, max=143, avg=37.27, stdev= 9.80 00:14:58.709 clat (usec): min=127, max=509, avg=238.20, stdev=34.01 00:14:58.709 lat (usec): min=156, max=552, avg=275.47, stdev=37.06 00:14:58.709 clat percentiles (usec): 00:14:58.709 | 1.00th=[ 143], 5.00th=[ 182], 10.00th=[ 200], 20.00th=[ 215], 00:14:58.709 | 30.00th=[ 225], 40.00th=[ 231], 50.00th=[ 239], 60.00th=[ 245], 00:14:58.709 | 70.00th=[ 255], 80.00th=[ 265], 90.00th=[ 281], 95.00th=[ 289], 
00:14:58.709 | 99.00th=[ 310], 99.50th=[ 322], 99.90th=[ 465], 99.95th=[ 510], 00:14:58.709 | 99.99th=[ 510] 00:14:58.709 bw ( KiB/s): min= 8192, max= 8192, per=29.64%, avg=8192.00, stdev= 0.00, samples=1 00:14:58.709 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:14:58.709 lat (usec) : 250=35.00%, 500=64.72%, 750=0.28% 00:14:58.709 cpu : usr=1.90%, sys=8.10%, ctx=3260, majf=0, minf=13 00:14:58.709 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:58.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:58.709 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:58.709 issued rwts: total=1536,1724,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:58.709 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:58.710 00:14:58.710 Run status group 0 (all jobs): 00:14:58.710 READ: bw=24.4MiB/s (25.5MB/s), 5303KiB/s-7357KiB/s (5430kB/s-7533kB/s), io=24.4MiB (25.6MB), run=1001-1001msec 00:14:58.710 WRITE: bw=27.0MiB/s (28.3MB/s), 6138KiB/s-8184KiB/s (6285kB/s-8380kB/s), io=27.0MiB (28.3MB), run=1001-1001msec 00:14:58.710 00:14:58.710 Disk stats (read/write): 00:14:58.710 nvme0n1: ios=1586/1781, merge=0/0, ticks=498/350, in_queue=848, util=88.47% 00:14:58.710 nvme0n2: ios=1287/1536, merge=0/0, ticks=416/403, in_queue=819, util=88.87% 00:14:58.710 nvme0n3: ios=1051/1532, merge=0/0, ticks=482/399, in_queue=881, util=90.52% 00:14:58.710 nvme0n4: ios=1275/1536, merge=0/0, ticks=430/385, in_queue=815, util=90.22% 00:14:58.710 14:19:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:14:58.710 [global] 00:14:58.710 thread=1 00:14:58.710 invalidate=1 00:14:58.710 rw=randwrite 00:14:58.710 time_based=1 00:14:58.710 runtime=1 00:14:58.710 ioengine=libaio 00:14:58.710 direct=1 00:14:58.710 bs=4096 00:14:58.710 iodepth=1 00:14:58.710 norandommap=0 00:14:58.710 numjobs=1 00:14:58.710 00:14:58.710 verify_dump=1 00:14:58.710 verify_backlog=512 00:14:58.710 verify_state_save=0 00:14:58.710 do_verify=1 00:14:58.710 verify=crc32c-intel 00:14:58.710 [job0] 00:14:58.710 filename=/dev/nvme0n1 00:14:58.710 [job1] 00:14:58.710 filename=/dev/nvme0n2 00:14:58.710 [job2] 00:14:58.710 filename=/dev/nvme0n3 00:14:58.710 [job3] 00:14:58.710 filename=/dev/nvme0n4 00:14:58.710 Could not set queue depth (nvme0n1) 00:14:58.710 Could not set queue depth (nvme0n2) 00:14:58.710 Could not set queue depth (nvme0n3) 00:14:58.710 Could not set queue depth (nvme0n4) 00:14:58.710 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:58.710 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:58.710 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:58.710 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:58.710 fio-3.35 00:14:58.710 Starting 4 threads 00:15:00.127 00:15:00.127 job0: (groupid=0, jobs=1): err= 0: pid=69788: Wed Nov 6 14:19:27 2024 00:15:00.127 read: IOPS=1576, BW=6306KiB/s (6457kB/s)(6312KiB/1001msec) 00:15:00.127 slat (nsec): min=7789, max=67005, avg=17160.62, stdev=9662.56 00:15:00.127 clat (usec): min=145, max=2092, avg=300.44, stdev=146.20 00:15:00.127 lat (usec): min=156, max=2102, avg=317.60, stdev=153.28 00:15:00.127 clat percentiles (usec): 00:15:00.127 | 
1.00th=[ 161], 5.00th=[ 174], 10.00th=[ 180], 20.00th=[ 190], 00:15:00.127 | 30.00th=[ 202], 40.00th=[ 212], 50.00th=[ 225], 60.00th=[ 249], 00:15:00.127 | 70.00th=[ 412], 80.00th=[ 461], 90.00th=[ 498], 95.00th=[ 529], 00:15:00.127 | 99.00th=[ 652], 99.50th=[ 717], 99.90th=[ 1516], 99.95th=[ 2089], 00:15:00.127 | 99.99th=[ 2089] 00:15:00.127 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:15:00.127 slat (usec): min=11, max=201, avg=24.52, stdev=12.90 00:15:00.127 clat (usec): min=97, max=3057, avg=215.88, stdev=118.21 00:15:00.127 lat (usec): min=114, max=3075, avg=240.40, stdev=125.67 00:15:00.127 clat percentiles (usec): 00:15:00.127 | 1.00th=[ 112], 5.00th=[ 121], 10.00th=[ 128], 20.00th=[ 139], 00:15:00.128 | 30.00th=[ 149], 40.00th=[ 157], 50.00th=[ 169], 60.00th=[ 184], 00:15:00.128 | 70.00th=[ 215], 80.00th=[ 338], 90.00th=[ 388], 95.00th=[ 416], 00:15:00.128 | 99.00th=[ 449], 99.50th=[ 465], 99.90th=[ 502], 99.95th=[ 701], 00:15:00.128 | 99.99th=[ 3064] 00:15:00.128 bw ( KiB/s): min=12016, max=12016, per=45.74%, avg=12016.00, stdev= 0.00, samples=1 00:15:00.128 iops : min= 3004, max= 3004, avg=3004.00, stdev= 0.00, samples=1 00:15:00.128 lat (usec) : 100=0.03%, 250=67.35%, 500=28.68%, 750=3.83% 00:15:00.128 lat (msec) : 2=0.06%, 4=0.06% 00:15:00.128 cpu : usr=2.30%, sys=5.90%, ctx=3627, majf=0, minf=11 00:15:00.128 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:00.128 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:00.128 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:00.128 issued rwts: total=1578,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:00.128 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:00.128 job1: (groupid=0, jobs=1): err= 0: pid=69789: Wed Nov 6 14:19:27 2024 00:15:00.128 read: IOPS=1481, BW=5926KiB/s (6068kB/s)(5932KiB/1001msec) 00:15:00.128 slat (nsec): min=9907, max=89268, avg=22303.47, stdev=10506.30 00:15:00.128 clat (usec): min=211, max=682, avg=313.99, stdev=39.00 00:15:00.128 lat (usec): min=227, max=721, avg=336.29, stdev=45.46 00:15:00.128 clat percentiles (usec): 00:15:00.128 | 1.00th=[ 241], 5.00th=[ 258], 10.00th=[ 269], 20.00th=[ 281], 00:15:00.128 | 30.00th=[ 289], 40.00th=[ 302], 50.00th=[ 314], 60.00th=[ 322], 00:15:00.128 | 70.00th=[ 334], 80.00th=[ 347], 90.00th=[ 363], 95.00th=[ 379], 00:15:00.128 | 99.00th=[ 408], 99.50th=[ 424], 99.90th=[ 537], 99.95th=[ 685], 00:15:00.128 | 99.99th=[ 685] 00:15:00.128 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:15:00.128 slat (usec): min=15, max=179, avg=49.18, stdev=12.40 00:15:00.128 clat (usec): min=150, max=4995, avg=270.93, stdev=129.63 00:15:00.128 lat (usec): min=170, max=5047, avg=320.11, stdev=131.78 00:15:00.128 clat percentiles (usec): 00:15:00.128 | 1.00th=[ 176], 5.00th=[ 212], 10.00th=[ 225], 20.00th=[ 239], 00:15:00.128 | 30.00th=[ 249], 40.00th=[ 260], 50.00th=[ 269], 60.00th=[ 277], 00:15:00.128 | 70.00th=[ 285], 80.00th=[ 297], 90.00th=[ 314], 95.00th=[ 326], 00:15:00.128 | 99.00th=[ 355], 99.50th=[ 379], 99.90th=[ 1532], 99.95th=[ 5014], 00:15:00.128 | 99.99th=[ 5014] 00:15:00.128 bw ( KiB/s): min= 7048, max= 7048, per=26.83%, avg=7048.00, stdev= 0.00, samples=1 00:15:00.128 iops : min= 1762, max= 1762, avg=1762.00, stdev= 0.00, samples=1 00:15:00.128 lat (usec) : 250=16.83%, 500=83.04%, 750=0.07% 00:15:00.128 lat (msec) : 2=0.03%, 10=0.03% 00:15:00.128 cpu : usr=2.70%, sys=8.60%, ctx=3020, majf=0, minf=21 00:15:00.128 
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:00.128 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:00.128 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:00.128 issued rwts: total=1483,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:00.128 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:00.128 job2: (groupid=0, jobs=1): err= 0: pid=69790: Wed Nov 6 14:19:27 2024 00:15:00.128 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:15:00.128 slat (nsec): min=17789, max=78583, avg=27262.64, stdev=6570.33 00:15:00.128 clat (usec): min=252, max=753, avg=420.33, stdev=84.67 00:15:00.128 lat (usec): min=279, max=785, avg=447.60, stdev=82.20 00:15:00.128 clat percentiles (usec): 00:15:00.128 | 1.00th=[ 269], 5.00th=[ 306], 10.00th=[ 318], 20.00th=[ 338], 00:15:00.128 | 30.00th=[ 359], 40.00th=[ 379], 50.00th=[ 424], 60.00th=[ 453], 00:15:00.128 | 70.00th=[ 474], 80.00th=[ 494], 90.00th=[ 523], 95.00th=[ 545], 00:15:00.128 | 99.00th=[ 668], 99.50th=[ 701], 99.90th=[ 709], 99.95th=[ 758], 00:15:00.128 | 99.99th=[ 758] 00:15:00.128 write: IOPS=1452, BW=5810KiB/s (5950kB/s)(5816KiB/1001msec); 0 zone resets 00:15:00.128 slat (usec): min=10, max=347, avg=38.19, stdev=18.54 00:15:00.128 clat (usec): min=5, max=1251, avg=329.45, stdev=82.08 00:15:00.128 lat (usec): min=168, max=1280, avg=367.64, stdev=84.33 00:15:00.128 clat percentiles (usec): 00:15:00.128 | 1.00th=[ 188], 5.00th=[ 241], 10.00th=[ 260], 20.00th=[ 277], 00:15:00.128 | 30.00th=[ 289], 40.00th=[ 302], 50.00th=[ 314], 60.00th=[ 330], 00:15:00.128 | 70.00th=[ 351], 80.00th=[ 379], 90.00th=[ 420], 95.00th=[ 453], 00:15:00.128 | 99.00th=[ 562], 99.50th=[ 611], 99.90th=[ 1221], 99.95th=[ 1254], 00:15:00.128 | 99.99th=[ 1254] 00:15:00.128 bw ( KiB/s): min= 6624, max= 6624, per=25.22%, avg=6624.00, stdev= 0.00, samples=1 00:15:00.128 iops : min= 1656, max= 1656, avg=1656.00, stdev= 0.00, samples=1 00:15:00.128 lat (usec) : 10=0.04%, 100=0.04%, 250=4.12%, 500=87.25%, 750=8.31% 00:15:00.128 lat (usec) : 1000=0.12% 00:15:00.128 lat (msec) : 2=0.12% 00:15:00.128 cpu : usr=2.40%, sys=6.40%, ctx=2484, majf=0, minf=11 00:15:00.128 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:00.128 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:00.128 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:00.128 issued rwts: total=1024,1454,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:00.128 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:00.128 job3: (groupid=0, jobs=1): err= 0: pid=69791: Wed Nov 6 14:19:27 2024 00:15:00.128 read: IOPS=1032, BW=4132KiB/s (4231kB/s)(4136KiB/1001msec) 00:15:00.128 slat (nsec): min=9250, max=88101, avg=30704.01, stdev=8110.03 00:15:00.128 clat (usec): min=179, max=890, avg=396.72, stdev=94.80 00:15:00.128 lat (usec): min=193, max=921, avg=427.42, stdev=98.05 00:15:00.128 clat percentiles (usec): 00:15:00.128 | 1.00th=[ 198], 5.00th=[ 237], 10.00th=[ 293], 20.00th=[ 326], 00:15:00.128 | 30.00th=[ 343], 40.00th=[ 363], 50.00th=[ 383], 60.00th=[ 424], 00:15:00.128 | 70.00th=[ 453], 80.00th=[ 478], 90.00th=[ 506], 95.00th=[ 537], 00:15:00.128 | 99.00th=[ 635], 99.50th=[ 725], 99.90th=[ 857], 99.95th=[ 889], 00:15:00.128 | 99.99th=[ 889] 00:15:00.128 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:15:00.128 slat (usec): min=28, max=150, avg=45.28, stdev= 7.96 00:15:00.128 clat (usec): min=101, 
max=1135, avg=312.63, stdev=80.75 00:15:00.128 lat (usec): min=155, max=1176, avg=357.91, stdev=82.16 00:15:00.128 clat percentiles (usec): 00:15:00.128 | 1.00th=[ 151], 5.00th=[ 186], 10.00th=[ 219], 20.00th=[ 255], 00:15:00.128 | 30.00th=[ 277], 40.00th=[ 289], 50.00th=[ 306], 60.00th=[ 326], 00:15:00.128 | 70.00th=[ 347], 80.00th=[ 371], 90.00th=[ 404], 95.00th=[ 424], 00:15:00.128 | 99.00th=[ 611], 99.50th=[ 635], 99.90th=[ 660], 99.95th=[ 1139], 00:15:00.128 | 99.99th=[ 1139] 00:15:00.128 bw ( KiB/s): min= 7416, max= 7416, per=28.23%, avg=7416.00, stdev= 0.00, samples=1 00:15:00.128 iops : min= 1854, max= 1854, avg=1854.00, stdev= 0.00, samples=1 00:15:00.128 lat (usec) : 250=13.15%, 500=80.97%, 750=5.68%, 1000=0.16% 00:15:00.128 lat (msec) : 2=0.04% 00:15:00.128 cpu : usr=2.40%, sys=8.10%, ctx=2578, majf=0, minf=7 00:15:00.128 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:00.128 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:00.128 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:00.128 issued rwts: total=1034,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:00.128 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:00.128 00:15:00.128 Run status group 0 (all jobs): 00:15:00.128 READ: bw=20.0MiB/s (20.9MB/s), 4092KiB/s-6306KiB/s (4190kB/s-6457kB/s), io=20.0MiB (21.0MB), run=1001-1001msec 00:15:00.128 WRITE: bw=25.7MiB/s (26.9MB/s), 5810KiB/s-8184KiB/s (5950kB/s-8380kB/s), io=25.7MiB (26.9MB), run=1001-1001msec 00:15:00.128 00:15:00.128 Disk stats (read/write): 00:15:00.128 nvme0n1: ios=1586/1779, merge=0/0, ticks=492/343, in_queue=835, util=88.97% 00:15:00.128 nvme0n2: ios=1114/1536, merge=0/0, ticks=363/432, in_queue=795, util=89.38% 00:15:00.128 nvme0n3: ios=1060/1124, merge=0/0, ticks=461/364, in_queue=825, util=90.64% 00:15:00.128 nvme0n4: ios=1051/1247, merge=0/0, ticks=453/402, in_queue=855, util=90.90% 00:15:00.128 14:19:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:15:00.128 [global] 00:15:00.128 thread=1 00:15:00.128 invalidate=1 00:15:00.128 rw=write 00:15:00.128 time_based=1 00:15:00.128 runtime=1 00:15:00.128 ioengine=libaio 00:15:00.128 direct=1 00:15:00.128 bs=4096 00:15:00.128 iodepth=128 00:15:00.128 norandommap=0 00:15:00.128 numjobs=1 00:15:00.128 00:15:00.128 verify_dump=1 00:15:00.128 verify_backlog=512 00:15:00.128 verify_state_save=0 00:15:00.128 do_verify=1 00:15:00.128 verify=crc32c-intel 00:15:00.128 [job0] 00:15:00.128 filename=/dev/nvme0n1 00:15:00.128 [job1] 00:15:00.128 filename=/dev/nvme0n2 00:15:00.128 [job2] 00:15:00.128 filename=/dev/nvme0n3 00:15:00.128 [job3] 00:15:00.128 filename=/dev/nvme0n4 00:15:00.128 Could not set queue depth (nvme0n1) 00:15:00.128 Could not set queue depth (nvme0n2) 00:15:00.128 Could not set queue depth (nvme0n3) 00:15:00.128 Could not set queue depth (nvme0n4) 00:15:00.387 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:00.387 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:00.387 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:00.387 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:00.387 fio-3.35 00:15:00.387 Starting 4 threads 00:15:01.763 00:15:01.763 
job0: (groupid=0, jobs=1): err= 0: pid=69846: Wed Nov 6 14:19:28 2024 00:15:01.763 read: IOPS=5749, BW=22.5MiB/s (23.5MB/s)(22.5MiB/1002msec) 00:15:01.763 slat (usec): min=7, max=2986, avg=77.85, stdev=312.66 00:15:01.763 clat (usec): min=329, max=14324, avg=10760.73, stdev=1181.11 00:15:01.763 lat (usec): min=2331, max=14354, avg=10838.58, stdev=1145.08 00:15:01.763 clat percentiles (usec): 00:15:01.763 | 1.00th=[ 5538], 5.00th=[ 9765], 10.00th=[10159], 20.00th=[10290], 00:15:01.763 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10683], 60.00th=[10683], 00:15:01.763 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11600], 95.00th=[13042], 00:15:01.763 | 99.00th=[14091], 99.50th=[14091], 99.90th=[14222], 99.95th=[14353], 00:15:01.763 | 99.99th=[14353] 00:15:01.763 write: IOPS=6131, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1002msec); 0 zone resets 00:15:01.763 slat (usec): min=22, max=4206, avg=78.24, stdev=257.26 00:15:01.763 clat (usec): min=8135, max=17102, avg=10506.94, stdev=1163.66 00:15:01.763 lat (usec): min=8601, max=17134, avg=10585.18, stdev=1146.54 00:15:01.763 clat percentiles (usec): 00:15:01.763 | 1.00th=[ 8848], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[ 9896], 00:15:01.763 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10290], 60.00th=[10421], 00:15:01.763 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11207], 95.00th=[12911], 00:15:01.763 | 99.00th=[16909], 99.50th=[16909], 99.90th=[17171], 99.95th=[17171], 00:15:01.763 | 99.99th=[17171] 00:15:01.763 bw ( KiB/s): min=24526, max=24625, per=48.93%, avg=24575.50, stdev=70.00, samples=2 00:15:01.763 iops : min= 6131, max= 6156, avg=6143.50, stdev=17.68, samples=2 00:15:01.763 lat (usec) : 500=0.01% 00:15:01.763 lat (msec) : 4=0.27%, 10=16.47%, 20=83.25% 00:15:01.763 cpu : usr=6.89%, sys=24.48%, ctx=417, majf=0, minf=1 00:15:01.763 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:15:01.763 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:01.763 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:01.763 issued rwts: total=5761,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:01.763 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:01.763 job1: (groupid=0, jobs=1): err= 0: pid=69847: Wed Nov 6 14:19:28 2024 00:15:01.763 read: IOPS=1528, BW=6113KiB/s (6260kB/s)(6144KiB/1005msec) 00:15:01.763 slat (usec): min=5, max=13877, avg=303.77, stdev=1090.06 00:15:01.763 clat (usec): min=10402, max=73928, avg=39634.77, stdev=20453.67 00:15:01.763 lat (usec): min=10410, max=73948, avg=39938.54, stdev=20586.98 00:15:01.763 clat percentiles (usec): 00:15:01.763 | 1.00th=[10421], 5.00th=[11600], 10.00th=[12518], 20.00th=[13042], 00:15:01.763 | 30.00th=[15533], 40.00th=[39060], 50.00th=[43779], 60.00th=[50070], 00:15:01.763 | 70.00th=[54264], 80.00th=[58983], 90.00th=[65799], 95.00th=[68682], 00:15:01.763 | 99.00th=[72877], 99.50th=[72877], 99.90th=[73925], 99.95th=[73925], 00:15:01.763 | 99.99th=[73925] 00:15:01.763 write: IOPS=1870, BW=7483KiB/s (7662kB/s)(7520KiB/1005msec); 0 zone resets 00:15:01.763 slat (usec): min=6, max=6691, avg=273.87, stdev=838.62 00:15:01.763 clat (usec): min=3484, max=66457, avg=35141.72, stdev=16318.62 00:15:01.763 lat (usec): min=5388, max=69947, avg=35415.59, stdev=16424.34 00:15:01.763 clat percentiles (usec): 00:15:01.763 | 1.00th=[10945], 5.00th=[12518], 10.00th=[12780], 20.00th=[15008], 00:15:01.763 | 30.00th=[18482], 40.00th=[33817], 50.00th=[38011], 60.00th=[41681], 00:15:01.763 | 70.00th=[44827], 80.00th=[50594], 90.00th=[56886], 
95.00th=[60556], 00:15:01.763 | 99.00th=[63701], 99.50th=[63701], 99.90th=[66323], 99.95th=[66323], 00:15:01.763 | 99.99th=[66323] 00:15:01.763 bw ( KiB/s): min= 5812, max= 8208, per=13.96%, avg=7010.00, stdev=1694.23, samples=2 00:15:01.763 iops : min= 1453, max= 2052, avg=1752.50, stdev=423.56, samples=2 00:15:01.763 lat (msec) : 4=0.03%, 10=0.47%, 20=31.53%, 50=38.26%, 100=29.71% 00:15:01.763 cpu : usr=1.99%, sys=7.37%, ctx=598, majf=0, minf=1 00:15:01.763 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:15:01.763 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:01.763 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:01.763 issued rwts: total=1536,1880,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:01.763 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:01.763 job2: (groupid=0, jobs=1): err= 0: pid=69848: Wed Nov 6 14:19:28 2024 00:15:01.763 read: IOPS=1292, BW=5169KiB/s (5293kB/s)(5200KiB/1006msec) 00:15:01.763 slat (usec): min=9, max=11783, avg=418.07, stdev=1291.89 00:15:01.763 clat (usec): min=5197, max=75225, avg=49758.67, stdev=13381.34 00:15:01.763 lat (usec): min=5217, max=75249, avg=50176.74, stdev=13388.94 00:15:01.763 clat percentiles (usec): 00:15:01.763 | 1.00th=[16450], 5.00th=[30016], 10.00th=[33162], 20.00th=[39060], 00:15:01.763 | 30.00th=[43779], 40.00th=[46924], 50.00th=[49546], 60.00th=[52167], 00:15:01.763 | 70.00th=[56886], 80.00th=[61604], 90.00th=[67634], 95.00th=[71828], 00:15:01.763 | 99.00th=[74974], 99.50th=[74974], 99.90th=[74974], 99.95th=[74974], 00:15:01.763 | 99.99th=[74974] 00:15:01.763 write: IOPS=1526, BW=6107KiB/s (6254kB/s)(6144KiB/1006msec); 0 zone resets 00:15:01.763 slat (usec): min=14, max=8350, avg=284.59, stdev=899.08 00:15:01.763 clat (usec): min=17668, max=72735, avg=40346.05, stdev=12578.17 00:15:01.763 lat (usec): min=17822, max=72777, avg=40630.65, stdev=12635.35 00:15:01.763 clat percentiles (usec): 00:15:01.763 | 1.00th=[23725], 5.00th=[24511], 10.00th=[26084], 20.00th=[28181], 00:15:01.763 | 30.00th=[29230], 40.00th=[33817], 50.00th=[37487], 60.00th=[43779], 00:15:01.763 | 70.00th=[49021], 80.00th=[53216], 90.00th=[57934], 95.00th=[61080], 00:15:01.763 | 99.00th=[68682], 99.50th=[70779], 99.90th=[71828], 99.95th=[72877], 00:15:01.763 | 99.99th=[72877] 00:15:01.763 bw ( KiB/s): min= 4423, max= 7856, per=12.22%, avg=6139.50, stdev=2427.50, samples=2 00:15:01.763 iops : min= 1105, max= 1964, avg=1534.50, stdev=607.40, samples=2 00:15:01.764 lat (msec) : 10=0.32%, 20=1.30%, 50=62.24%, 100=36.14% 00:15:01.764 cpu : usr=2.59%, sys=6.47%, ctx=585, majf=0, minf=5 00:15:01.764 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:15:01.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:01.764 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:01.764 issued rwts: total=1300,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:01.764 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:01.764 job3: (groupid=0, jobs=1): err= 0: pid=69849: Wed Nov 6 14:19:28 2024 00:15:01.764 read: IOPS=2933, BW=11.5MiB/s (12.0MB/s)(11.5MiB/1004msec) 00:15:01.764 slat (usec): min=13, max=10298, avg=179.84, stdev=716.26 00:15:01.764 clat (usec): min=1955, max=30385, avg=23088.02, stdev=3698.02 00:15:01.764 lat (usec): min=3911, max=30405, avg=23267.86, stdev=3653.58 00:15:01.764 clat percentiles (usec): 00:15:01.764 | 1.00th=[ 7046], 5.00th=[17957], 10.00th=[19530], 20.00th=[21103], 
00:15:01.764 | 30.00th=[21627], 40.00th=[22414], 50.00th=[22676], 60.00th=[23725], 00:15:01.764 | 70.00th=[24511], 80.00th=[26608], 90.00th=[27657], 95.00th=[28443], 00:15:01.764 | 99.00th=[29754], 99.50th=[30278], 99.90th=[30278], 99.95th=[30278], 00:15:01.764 | 99.99th=[30278] 00:15:01.764 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:15:01.764 slat (usec): min=23, max=8365, avg=141.06, stdev=622.95 00:15:01.764 clat (usec): min=12200, max=31315, avg=18829.46, stdev=4363.15 00:15:01.764 lat (usec): min=14864, max=31348, avg=18970.52, stdev=4350.35 00:15:01.764 clat percentiles (usec): 00:15:01.764 | 1.00th=[13566], 5.00th=[15008], 10.00th=[15664], 20.00th=[16057], 00:15:01.764 | 30.00th=[16188], 40.00th=[16450], 50.00th=[16909], 60.00th=[17171], 00:15:01.764 | 70.00th=[18744], 80.00th=[21627], 90.00th=[26608], 95.00th=[29230], 00:15:01.764 | 99.00th=[30802], 99.50th=[31065], 99.90th=[31327], 99.95th=[31327], 00:15:01.764 | 99.99th=[31327] 00:15:01.764 bw ( KiB/s): min=12263, max=12312, per=24.46%, avg=12287.50, stdev=34.65, samples=2 00:15:01.764 iops : min= 3065, max= 3078, avg=3071.50, stdev= 9.19, samples=2 00:15:01.764 lat (msec) : 2=0.02%, 4=0.07%, 10=0.47%, 20=45.07%, 50=54.38% 00:15:01.764 cpu : usr=3.99%, sys=12.66%, ctx=268, majf=0, minf=2 00:15:01.764 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:15:01.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:01.764 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:01.764 issued rwts: total=2945,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:01.764 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:01.764 00:15:01.764 Run status group 0 (all jobs): 00:15:01.764 READ: bw=44.8MiB/s (47.0MB/s), 5169KiB/s-22.5MiB/s (5293kB/s-23.5MB/s), io=45.1MiB (47.3MB), run=1002-1006msec 00:15:01.764 WRITE: bw=49.0MiB/s (51.4MB/s), 6107KiB/s-24.0MiB/s (6254kB/s-25.1MB/s), io=49.3MiB (51.7MB), run=1002-1006msec 00:15:01.764 00:15:01.764 Disk stats (read/write): 00:15:01.764 nvme0n1: ios=4977/5120, merge=0/0, ticks=11213/10170, in_queue=21383, util=86.83% 00:15:01.764 nvme0n2: ios=1461/1536, merge=0/0, ticks=13957/12030, in_queue=25987, util=86.67% 00:15:01.764 nvme0n3: ios=1024/1463, merge=0/0, ticks=12755/12706, in_queue=25461, util=89.02% 00:15:01.764 nvme0n4: ios=2450/2560, merge=0/0, ticks=14244/10036, in_queue=24280, util=89.37% 00:15:01.764 14:19:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:15:01.764 [global] 00:15:01.764 thread=1 00:15:01.764 invalidate=1 00:15:01.764 rw=randwrite 00:15:01.764 time_based=1 00:15:01.764 runtime=1 00:15:01.764 ioengine=libaio 00:15:01.764 direct=1 00:15:01.764 bs=4096 00:15:01.764 iodepth=128 00:15:01.764 norandommap=0 00:15:01.764 numjobs=1 00:15:01.764 00:15:01.764 verify_dump=1 00:15:01.764 verify_backlog=512 00:15:01.764 verify_state_save=0 00:15:01.764 do_verify=1 00:15:01.764 verify=crc32c-intel 00:15:01.764 [job0] 00:15:01.764 filename=/dev/nvme0n1 00:15:01.764 [job1] 00:15:01.764 filename=/dev/nvme0n2 00:15:01.764 [job2] 00:15:01.764 filename=/dev/nvme0n3 00:15:01.764 [job3] 00:15:01.764 filename=/dev/nvme0n4 00:15:01.764 Could not set queue depth (nvme0n1) 00:15:01.764 Could not set queue depth (nvme0n2) 00:15:01.764 Could not set queue depth (nvme0n3) 00:15:01.764 Could not set queue depth (nvme0n4) 00:15:01.764 job0: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:01.764 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:01.764 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:01.764 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:01.764 fio-3.35 00:15:01.764 Starting 4 threads 00:15:03.139 00:15:03.139 job0: (groupid=0, jobs=1): err= 0: pid=69913: Wed Nov 6 14:19:30 2024 00:15:03.139 read: IOPS=3888, BW=15.2MiB/s (15.9MB/s)(15.2MiB/1003msec) 00:15:03.139 slat (usec): min=9, max=6708, avg=123.75, stdev=534.06 00:15:03.139 clat (usec): min=1139, max=22879, avg=16217.61, stdev=1869.16 00:15:03.139 lat (usec): min=5289, max=22924, avg=16341.37, stdev=1872.19 00:15:03.139 clat percentiles (usec): 00:15:03.139 | 1.00th=[ 6325], 5.00th=[13829], 10.00th=[14746], 20.00th=[15270], 00:15:03.139 | 30.00th=[15664], 40.00th=[15926], 50.00th=[16188], 60.00th=[16581], 00:15:03.139 | 70.00th=[16909], 80.00th=[17433], 90.00th=[18220], 95.00th=[19006], 00:15:03.139 | 99.00th=[20055], 99.50th=[20841], 99.90th=[21627], 99.95th=[22676], 00:15:03.139 | 99.99th=[22938] 00:15:03.139 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:15:03.139 slat (usec): min=22, max=8212, avg=114.27, stdev=629.77 00:15:03.139 clat (usec): min=8448, max=23829, avg=15480.80, stdev=1598.94 00:15:03.139 lat (usec): min=8502, max=24568, avg=15595.06, stdev=1704.24 00:15:03.139 clat percentiles (usec): 00:15:03.139 | 1.00th=[10814], 5.00th=[13042], 10.00th=[13435], 20.00th=[14484], 00:15:03.139 | 30.00th=[14877], 40.00th=[15270], 50.00th=[15533], 60.00th=[15795], 00:15:03.139 | 70.00th=[16057], 80.00th=[16450], 90.00th=[17171], 95.00th=[18220], 00:15:03.139 | 99.00th=[20579], 99.50th=[21103], 99.90th=[22414], 99.95th=[23200], 00:15:03.139 | 99.99th=[23725] 00:15:03.140 bw ( KiB/s): min=16384, max=16384, per=38.75%, avg=16384.00, stdev= 0.00, samples=2 00:15:03.140 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:15:03.140 lat (msec) : 2=0.01%, 10=0.83%, 20=97.67%, 50=1.49% 00:15:03.140 cpu : usr=4.39%, sys=17.27%, ctx=305, majf=0, minf=2 00:15:03.140 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:15:03.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:03.140 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:03.140 issued rwts: total=3900,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:03.140 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:03.140 job1: (groupid=0, jobs=1): err= 0: pid=69914: Wed Nov 6 14:19:30 2024 00:15:03.140 read: IOPS=1013, BW=4055KiB/s (4153kB/s)(4096KiB/1010msec) 00:15:03.140 slat (usec): min=9, max=20421, avg=361.15, stdev=1738.77 00:15:03.140 clat (msec): min=30, max=108, avg=44.32, stdev=15.50 00:15:03.140 lat (msec): min=32, max=108, avg=44.68, stdev=15.69 00:15:03.140 clat percentiles (msec): 00:15:03.140 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 35], 00:15:03.140 | 30.00th=[ 35], 40.00th=[ 36], 50.00th=[ 37], 60.00th=[ 39], 00:15:03.140 | 70.00th=[ 44], 80.00th=[ 59], 90.00th=[ 70], 95.00th=[ 77], 00:15:03.140 | 99.00th=[ 97], 99.50th=[ 99], 99.90th=[ 99], 99.95th=[ 109], 00:15:03.140 | 99.99th=[ 109] 00:15:03.140 write: IOPS=1285, BW=5141KiB/s (5264kB/s)(5192KiB/1010msec); 0 zone resets 00:15:03.140 slat (usec): min=23, 
max=21263, avg=476.84, stdev=1929.71 00:15:03.140 clat (msec): min=5, max=121, avg=61.73, stdev=29.44 00:15:03.140 lat (msec): min=14, max=121, avg=62.21, stdev=29.56 00:15:03.140 clat percentiles (msec): 00:15:03.140 | 1.00th=[ 16], 5.00th=[ 18], 10.00th=[ 24], 20.00th=[ 36], 00:15:03.140 | 30.00th=[ 40], 40.00th=[ 45], 50.00th=[ 59], 60.00th=[ 68], 00:15:03.140 | 70.00th=[ 79], 80.00th=[ 97], 90.00th=[ 105], 95.00th=[ 110], 00:15:03.140 | 99.00th=[ 120], 99.50th=[ 122], 99.90th=[ 122], 99.95th=[ 122], 00:15:03.140 | 99.99th=[ 122] 00:15:03.140 bw ( KiB/s): min= 3840, max= 5531, per=11.08%, avg=4685.50, stdev=1195.72, samples=2 00:15:03.140 iops : min= 960, max= 1382, avg=1171.00, stdev=298.40, samples=2 00:15:03.140 lat (msec) : 10=0.04%, 20=2.80%, 50=55.00%, 100=32.13%, 250=10.03% 00:15:03.140 cpu : usr=1.78%, sys=5.15%, ctx=156, majf=0, minf=15 00:15:03.140 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:15:03.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:03.140 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:03.140 issued rwts: total=1024,1298,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:03.140 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:03.140 job2: (groupid=0, jobs=1): err= 0: pid=69915: Wed Nov 6 14:19:30 2024 00:15:03.140 read: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec) 00:15:03.140 slat (usec): min=9, max=6614, avg=183.75, stdev=906.90 00:15:03.140 clat (usec): min=17106, max=27175, avg=24229.63, stdev=1318.04 00:15:03.140 lat (usec): min=21856, max=27195, avg=24413.38, stdev=969.97 00:15:03.140 clat percentiles (usec): 00:15:03.140 | 1.00th=[18744], 5.00th=[22152], 10.00th=[22938], 20.00th=[23462], 00:15:03.140 | 30.00th=[23725], 40.00th=[23987], 50.00th=[24249], 60.00th=[24511], 00:15:03.140 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25822], 95.00th=[25822], 00:15:03.140 | 99.00th=[27132], 99.50th=[27132], 99.90th=[27132], 99.95th=[27132], 00:15:03.140 | 99.99th=[27132] 00:15:03.140 write: IOPS=2707, BW=10.6MiB/s (11.1MB/s)(10.6MiB/1005msec); 0 zone resets 00:15:03.140 slat (usec): min=15, max=6847, avg=183.25, stdev=837.18 00:15:03.140 clat (usec): min=460, max=26460, avg=23617.67, stdev=2791.15 00:15:03.140 lat (usec): min=5662, max=26492, avg=23800.93, stdev=2663.26 00:15:03.140 clat percentiles (usec): 00:15:03.140 | 1.00th=[ 6652], 5.00th=[19530], 10.00th=[22152], 20.00th=[22938], 00:15:03.140 | 30.00th=[23462], 40.00th=[23987], 50.00th=[24249], 60.00th=[24511], 00:15:03.140 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25560], 95.00th=[25822], 00:15:03.140 | 99.00th=[26346], 99.50th=[26346], 99.90th=[26346], 99.95th=[26346], 00:15:03.140 | 99.99th=[26346] 00:15:03.140 bw ( KiB/s): min= 8704, max=12064, per=24.56%, avg=10384.00, stdev=2375.88, samples=2 00:15:03.140 iops : min= 2176, max= 3016, avg=2596.00, stdev=593.97, samples=2 00:15:03.140 lat (usec) : 500=0.02% 00:15:03.140 lat (msec) : 10=0.61%, 20=3.58%, 50=95.80% 00:15:03.140 cpu : usr=3.49%, sys=11.25%, ctx=167, majf=0, minf=1 00:15:03.140 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:15:03.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:03.140 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:03.140 issued rwts: total=2560,2721,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:03.140 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:03.140 job3: (groupid=0, jobs=1): err= 0: pid=69916: Wed Nov 
6 14:19:30 2024 00:15:03.140 read: IOPS=2484, BW=9937KiB/s (10.2MB/s)(9.80MiB/1010msec) 00:15:03.140 slat (usec): min=18, max=32231, avg=212.01, stdev=1467.33 00:15:03.140 clat (usec): min=4691, max=60607, avg=29022.30, stdev=6759.15 00:15:03.140 lat (usec): min=12576, max=60652, avg=29234.31, stdev=6839.47 00:15:03.140 clat percentiles (usec): 00:15:03.140 | 1.00th=[13304], 5.00th=[18744], 10.00th=[22938], 20.00th=[24511], 00:15:03.140 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26346], 60.00th=[30540], 00:15:03.140 | 70.00th=[33817], 80.00th=[34866], 90.00th=[36439], 95.00th=[43254], 00:15:03.140 | 99.00th=[46400], 99.50th=[46400], 99.90th=[49021], 99.95th=[53740], 00:15:03.140 | 99.99th=[60556] 00:15:03.140 write: IOPS=2534, BW=9.90MiB/s (10.4MB/s)(10.0MiB/1010msec); 0 zone resets 00:15:03.140 slat (usec): min=11, max=18729, avg=172.19, stdev=1109.62 00:15:03.140 clat (usec): min=10205, max=46099, avg=21513.59, stdev=4424.30 00:15:03.140 lat (usec): min=13954, max=46158, avg=21685.78, stdev=4346.42 00:15:03.140 clat percentiles (usec): 00:15:03.140 | 1.00th=[13960], 5.00th=[15926], 10.00th=[17695], 20.00th=[18744], 00:15:03.140 | 30.00th=[19006], 40.00th=[19530], 50.00th=[20317], 60.00th=[21890], 00:15:03.140 | 70.00th=[22414], 80.00th=[24511], 90.00th=[28443], 95.00th=[28967], 00:15:03.140 | 99.00th=[36439], 99.50th=[36963], 99.90th=[36963], 99.95th=[36963], 00:15:03.140 | 99.99th=[45876] 00:15:03.140 bw ( KiB/s): min= 9736, max=10765, per=24.24%, avg=10250.50, stdev=727.61, samples=2 00:15:03.140 iops : min= 2434, max= 2691, avg=2562.50, stdev=181.73, samples=2 00:15:03.140 lat (msec) : 10=0.02%, 20=27.46%, 50=72.48%, 100=0.04% 00:15:03.140 cpu : usr=3.27%, sys=10.31%, ctx=109, majf=0, minf=3 00:15:03.140 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:15:03.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:03.140 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:03.140 issued rwts: total=2509,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:03.140 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:03.140 00:15:03.140 Run status group 0 (all jobs): 00:15:03.140 READ: bw=38.6MiB/s (40.5MB/s), 4055KiB/s-15.2MiB/s (4153kB/s-15.9MB/s), io=39.0MiB (40.9MB), run=1003-1010msec 00:15:03.140 WRITE: bw=41.3MiB/s (43.3MB/s), 5141KiB/s-16.0MiB/s (5264kB/s-16.7MB/s), io=41.7MiB (43.7MB), run=1003-1010msec 00:15:03.140 00:15:03.140 Disk stats (read/write): 00:15:03.140 nvme0n1: ios=3248/3584, merge=0/0, ticks=25212/22287, in_queue=47499, util=87.27% 00:15:03.140 nvme0n2: ios=1073/1055, merge=0/0, ticks=14623/19513, in_queue=34136, util=88.45% 00:15:03.140 nvme0n3: ios=2048/2432, merge=0/0, ticks=11356/12896, in_queue=24252, util=88.87% 00:15:03.140 nvme0n4: ios=2038/2056, merge=0/0, ticks=58911/42449, in_queue=101360, util=89.52% 00:15:03.140 14:19:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:15:03.140 14:19:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=69929 00:15:03.140 14:19:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:15:03.140 14:19:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:15:03.140 [global] 00:15:03.140 thread=1 00:15:03.140 invalidate=1 00:15:03.140 rw=read 00:15:03.140 time_based=1 00:15:03.140 runtime=10 00:15:03.140 ioengine=libaio 00:15:03.140 direct=1 00:15:03.140 bs=4096 
00:15:03.140 iodepth=1 00:15:03.140 norandommap=1 00:15:03.140 numjobs=1 00:15:03.140 00:15:03.140 [job0] 00:15:03.140 filename=/dev/nvme0n1 00:15:03.140 [job1] 00:15:03.140 filename=/dev/nvme0n2 00:15:03.140 [job2] 00:15:03.140 filename=/dev/nvme0n3 00:15:03.140 [job3] 00:15:03.140 filename=/dev/nvme0n4 00:15:03.140 Could not set queue depth (nvme0n1) 00:15:03.140 Could not set queue depth (nvme0n2) 00:15:03.140 Could not set queue depth (nvme0n3) 00:15:03.140 Could not set queue depth (nvme0n4) 00:15:03.140 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:03.140 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:03.140 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:03.140 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:03.140 fio-3.35 00:15:03.140 Starting 4 threads 00:15:06.421 14:19:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:15:06.421 fio: pid=69972, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:15:06.421 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=35930112, buflen=4096 00:15:06.421 14:19:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:15:06.421 fio: pid=69971, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:15:06.421 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=40878080, buflen=4096 00:15:06.421 14:19:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:06.421 14:19:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:15:06.680 fio: pid=69969, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:15:06.680 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=42954752, buflen=4096 00:15:06.938 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:06.938 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:15:07.197 fio: pid=69970, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:15:07.197 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=54157312, buflen=4096 00:15:07.197 00:15:07.197 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=69969: Wed Nov 6 14:19:34 2024 00:15:07.197 read: IOPS=3172, BW=12.4MiB/s (13.0MB/s)(41.0MiB/3306msec) 00:15:07.197 slat (usec): min=6, max=18665, avg=14.96, stdev=225.45 00:15:07.197 clat (usec): min=149, max=3944, avg=299.06, stdev=85.76 00:15:07.197 lat (usec): min=162, max=22610, avg=314.02, stdev=266.95 00:15:07.197 clat percentiles (usec): 00:15:07.197 | 1.00th=[ 188], 5.00th=[ 245], 10.00th=[ 258], 20.00th=[ 273], 00:15:07.197 | 30.00th=[ 281], 40.00th=[ 285], 50.00th=[ 293], 60.00th=[ 302], 00:15:07.197 | 70.00th=[ 310], 80.00th=[ 318], 90.00th=[ 334], 95.00th=[ 351], 00:15:07.197 | 
99.00th=[ 449], 99.50th=[ 515], 99.90th=[ 1303], 99.95th=[ 1909], 00:15:07.197 | 99.99th=[ 3818] 00:15:07.197 bw ( KiB/s): min=11872, max=13029, per=28.03%, avg=12621.17, stdev=444.40, samples=6 00:15:07.197 iops : min= 2968, max= 3257, avg=3155.00, stdev=111.06, samples=6 00:15:07.197 lat (usec) : 250=6.53%, 500=92.90%, 750=0.36%, 1000=0.07% 00:15:07.197 lat (msec) : 2=0.09%, 4=0.05% 00:15:07.197 cpu : usr=0.79%, sys=3.18%, ctx=10494, majf=0, minf=1 00:15:07.197 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:07.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.197 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.197 issued rwts: total=10488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:07.197 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:07.197 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=69970: Wed Nov 6 14:19:34 2024 00:15:07.197 read: IOPS=3505, BW=13.7MiB/s (14.4MB/s)(51.6MiB/3772msec) 00:15:07.197 slat (usec): min=6, max=13696, avg=15.98, stdev=232.54 00:15:07.197 clat (usec): min=121, max=4212, avg=268.15, stdev=81.63 00:15:07.197 lat (usec): min=128, max=13962, avg=284.13, stdev=247.86 00:15:07.197 clat percentiles (usec): 00:15:07.197 | 1.00th=[ 133], 5.00th=[ 145], 10.00th=[ 159], 20.00th=[ 194], 00:15:07.197 | 30.00th=[ 260], 40.00th=[ 273], 50.00th=[ 281], 60.00th=[ 293], 00:15:07.197 | 70.00th=[ 302], 80.00th=[ 314], 90.00th=[ 330], 95.00th=[ 343], 00:15:07.197 | 99.00th=[ 392], 99.50th=[ 494], 99.90th=[ 824], 99.95th=[ 1172], 00:15:07.197 | 99.99th=[ 2057] 00:15:07.197 bw ( KiB/s): min=12048, max=19198, per=30.51%, avg=13739.71, stdev=2439.65, samples=7 00:15:07.197 iops : min= 3012, max= 4799, avg=3434.71, stdev=609.77, samples=7 00:15:07.197 lat (usec) : 250=26.00%, 500=73.52%, 750=0.34%, 1000=0.07% 00:15:07.197 lat (msec) : 2=0.05%, 4=0.01%, 10=0.01% 00:15:07.197 cpu : usr=0.69%, sys=3.74%, ctx=13230, majf=0, minf=2 00:15:07.197 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:07.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.197 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.197 issued rwts: total=13223,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:07.197 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:07.197 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=69971: Wed Nov 6 14:19:34 2024 00:15:07.197 read: IOPS=3249, BW=12.7MiB/s (13.3MB/s)(39.0MiB/3072msec) 00:15:07.197 slat (usec): min=7, max=13358, avg=20.18, stdev=159.26 00:15:07.197 clat (usec): min=146, max=3973, avg=286.09, stdev=82.25 00:15:07.197 lat (usec): min=159, max=13604, avg=306.27, stdev=179.52 00:15:07.197 clat percentiles (usec): 00:15:07.197 | 1.00th=[ 180], 5.00th=[ 196], 10.00th=[ 204], 20.00th=[ 225], 00:15:07.197 | 30.00th=[ 262], 40.00th=[ 285], 50.00th=[ 297], 60.00th=[ 306], 00:15:07.197 | 70.00th=[ 318], 80.00th=[ 330], 90.00th=[ 347], 95.00th=[ 359], 00:15:07.197 | 99.00th=[ 388], 99.50th=[ 404], 99.90th=[ 824], 99.95th=[ 2114], 00:15:07.197 | 99.99th=[ 3982] 00:15:07.197 bw ( KiB/s): min=11768, max=16208, per=28.51%, avg=12836.40, stdev=1894.83, samples=5 00:15:07.197 iops : min= 2942, max= 4052, avg=3208.80, stdev=473.84, samples=5 00:15:07.197 lat (usec) : 250=27.59%, 500=72.28%, 750=0.02%, 1000=0.02% 00:15:07.197 lat (msec) : 2=0.03%, 4=0.05% 
00:15:07.198 cpu : usr=0.91%, sys=5.37%, ctx=9994, majf=0, minf=2 00:15:07.198 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:07.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.198 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.198 issued rwts: total=9981,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:07.198 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:07.198 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=69972: Wed Nov 6 14:19:34 2024 00:15:07.198 read: IOPS=3095, BW=12.1MiB/s (12.7MB/s)(34.3MiB/2834msec) 00:15:07.198 slat (usec): min=7, max=605, avg=18.00, stdev=11.55 00:15:07.198 clat (usec): min=157, max=7115, avg=303.03, stdev=124.88 00:15:07.198 lat (usec): min=171, max=7123, avg=321.03, stdev=127.13 00:15:07.198 clat percentiles (usec): 00:15:07.198 | 1.00th=[ 192], 5.00th=[ 202], 10.00th=[ 215], 20.00th=[ 260], 00:15:07.198 | 30.00th=[ 285], 40.00th=[ 297], 50.00th=[ 306], 60.00th=[ 314], 00:15:07.198 | 70.00th=[ 322], 80.00th=[ 334], 90.00th=[ 355], 95.00th=[ 375], 00:15:07.198 | 99.00th=[ 482], 99.50th=[ 502], 99.90th=[ 1696], 99.95th=[ 3458], 00:15:07.198 | 99.99th=[ 7111] 00:15:07.198 bw ( KiB/s): min=11281, max=15952, per=27.78%, avg=12509.00, stdev=1938.90, samples=5 00:15:07.198 iops : min= 2820, max= 3988, avg=3127.20, stdev=484.77, samples=5 00:15:07.198 lat (usec) : 250=18.56%, 500=80.91%, 750=0.35%, 1000=0.03% 00:15:07.198 lat (msec) : 2=0.05%, 4=0.08%, 10=0.01% 00:15:07.198 cpu : usr=1.31%, sys=4.91%, ctx=8775, majf=0, minf=2 00:15:07.198 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:07.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.198 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.198 issued rwts: total=8773,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:07.198 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:07.198 00:15:07.198 Run status group 0 (all jobs): 00:15:07.198 READ: bw=44.0MiB/s (46.1MB/s), 12.1MiB/s-13.7MiB/s (12.7MB/s-14.4MB/s), io=166MiB (174MB), run=2834-3772msec 00:15:07.198 00:15:07.198 Disk stats (read/write): 00:15:07.198 nvme0n1: ios=9827/0, merge=0/0, ticks=2975/0, in_queue=2975, util=94.91% 00:15:07.198 nvme0n2: ios=12388/0, merge=0/0, ticks=3458/0, in_queue=3458, util=95.21% 00:15:07.198 nvme0n3: ios=9158/0, merge=0/0, ticks=2720/0, in_queue=2720, util=96.53% 00:15:07.198 nvme0n4: ios=8168/0, merge=0/0, ticks=2442/0, in_queue=2442, util=96.10% 00:15:07.457 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:07.457 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:15:08.025 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:08.025 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:15:08.285 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:08.285 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:15:08.852 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:08.852 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:15:09.112 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:09.112 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:15:09.680 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:15:09.680 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 69929 00:15:09.680 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:15:09.680 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:09.680 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:09.680 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:09.680 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:15:09.680 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:15:09.680 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:09.680 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:15:09.680 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:09.680 nvmf hotplug test: fio failed as expected 00:15:09.680 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:15:09.680 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:15:09.680 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:15:09.680 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:09.939 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:15:09.939 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:15:09.939 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:15:09.939 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:15:09.939 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:15:09.939 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:09.939 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:15:09.939 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:09.939 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@124 -- # set +e 00:15:09.939 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:09.939 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:09.939 rmmod nvme_tcp 00:15:09.939 rmmod nvme_fabrics 00:15:09.939 rmmod nvme_keyring 00:15:09.939 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:09.939 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:15:09.939 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:15:09.939 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 69542 ']' 00:15:09.939 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 69542 00:15:09.939 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 69542 ']' 00:15:09.939 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 69542 00:15:10.200 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:15:10.200 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:10.200 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69542 00:15:10.200 killing process with pid 69542 00:15:10.200 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:10.200 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:10.200 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69542' 00:15:10.200 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 69542 00:15:10.200 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 69542 00:15:11.588 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:11.588 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:11.588 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:11.588 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:15:11.588 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:15:11.588 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:11.588 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:15:11.588 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:11.588 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:11.588 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:11.588 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:11.588 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:11.588 14:19:39 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:11.588 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:11.588 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:11.588 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:11.588 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:11.588 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:11.588 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:11.588 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:11.588 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:11.588 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:11.846 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:11.846 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:11.846 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:11.846 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:11.846 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:15:11.846 00:15:11.846 real 0m23.167s 00:15:11.846 user 1m24.193s 00:15:11.846 sys 0m11.277s 00:15:11.846 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:11.846 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.846 ************************************ 00:15:11.846 END TEST nvmf_fio_target 00:15:11.846 ************************************ 00:15:11.846 14:19:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:15:11.846 14:19:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:11.846 14:19:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:11.846 14:19:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:15:11.846 ************************************ 00:15:11.846 START TEST nvmf_bdevio 00:15:11.846 ************************************ 00:15:11.846 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:15:12.105 * Looking for test storage... 
00:15:12.105 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:12.105 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:12.105 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:15:12.105 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:12.105 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:12.105 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:12.105 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:12.105 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:12.105 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:15:12.105 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:15:12.105 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:15:12.105 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:15:12.105 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:12.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.106 --rc genhtml_branch_coverage=1 00:15:12.106 --rc genhtml_function_coverage=1 00:15:12.106 --rc genhtml_legend=1 00:15:12.106 --rc geninfo_all_blocks=1 00:15:12.106 --rc geninfo_unexecuted_blocks=1 00:15:12.106 00:15:12.106 ' 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:12.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.106 --rc genhtml_branch_coverage=1 00:15:12.106 --rc genhtml_function_coverage=1 00:15:12.106 --rc genhtml_legend=1 00:15:12.106 --rc geninfo_all_blocks=1 00:15:12.106 --rc geninfo_unexecuted_blocks=1 00:15:12.106 00:15:12.106 ' 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:12.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.106 --rc genhtml_branch_coverage=1 00:15:12.106 --rc genhtml_function_coverage=1 00:15:12.106 --rc genhtml_legend=1 00:15:12.106 --rc geninfo_all_blocks=1 00:15:12.106 --rc geninfo_unexecuted_blocks=1 00:15:12.106 00:15:12.106 ' 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:12.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.106 --rc genhtml_branch_coverage=1 00:15:12.106 --rc genhtml_function_coverage=1 00:15:12.106 --rc genhtml_legend=1 00:15:12.106 --rc geninfo_all_blocks=1 00:15:12.106 --rc geninfo_unexecuted_blocks=1 00:15:12.106 00:15:12.106 ' 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:12.106 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:12.106 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:12.107 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:12.107 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:12.107 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:12.107 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:12.107 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:12.107 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:12.107 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:12.107 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:12.107 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:12.107 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:12.107 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:12.107 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:12.107 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:12.107 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:12.107 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:12.107 Cannot find device "nvmf_init_br" 00:15:12.107 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:15:12.107 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:12.366 Cannot find device "nvmf_init_br2" 00:15:12.366 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:15:12.366 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:12.366 Cannot find device "nvmf_tgt_br" 00:15:12.366 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:15:12.366 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:12.366 Cannot find device "nvmf_tgt_br2" 00:15:12.366 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:15:12.366 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:12.366 Cannot find device "nvmf_init_br" 00:15:12.366 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:15:12.366 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:12.366 Cannot find device "nvmf_init_br2" 00:15:12.366 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:15:12.366 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:12.366 Cannot find device "nvmf_tgt_br" 00:15:12.366 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:15:12.366 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:12.366 Cannot find device "nvmf_tgt_br2" 00:15:12.366 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:15:12.366 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:12.366 Cannot find device "nvmf_br" 00:15:12.366 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:15:12.366 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:12.366 Cannot find device "nvmf_init_if" 00:15:12.366 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:15:12.366 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:12.366 Cannot find device "nvmf_init_if2" 00:15:12.366 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:15:12.366 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:12.366 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:12.366 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:15:12.366 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:12.366 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:12.366 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:15:12.366 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:12.366 
14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:12.366 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:12.366 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:12.366 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:12.366 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:12.625 14:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:12.625 14:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:12.625 14:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:12.625 14:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:12.625 14:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:12.625 14:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:12.625 14:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:12.625 14:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:12.625 14:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:12.625 14:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:12.625 14:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:12.625 14:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:12.625 14:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:12.625 14:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:12.625 14:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:12.625 14:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:12.625 14:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:12.625 14:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:12.625 14:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:12.625 14:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:12.625 14:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:12.625 14:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:12.625 14:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:12.625 14:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:12.625 14:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:12.625 14:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:12.625 14:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:12.625 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:12.625 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.127 ms 00:15:12.625 00:15:12.625 --- 10.0.0.3 ping statistics --- 00:15:12.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.625 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:15:12.625 14:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:12.625 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:12.625 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.096 ms 00:15:12.625 00:15:12.625 --- 10.0.0.4 ping statistics --- 00:15:12.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.625 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:15:12.625 14:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:12.625 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:12.625 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:15:12.625 00:15:12.625 --- 10.0.0.1 ping statistics --- 00:15:12.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.625 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:15:12.625 14:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:12.625 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:12.625 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:15:12.625 00:15:12.625 --- 10.0.0.2 ping statistics --- 00:15:12.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.625 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:15:12.625 14:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:12.625 14:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:15:12.625 14:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:12.625 14:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:12.625 14:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:12.625 14:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:12.625 14:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:12.625 14:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:12.625 14:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:12.883 14:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:12.883 14:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:12.883 14:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:12.883 14:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:12.883 14:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=70314 00:15:12.884 14:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:15:12.884 14:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 70314 00:15:12.884 14:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 70314 ']' 00:15:12.884 14:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.884 14:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:12.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:12.884 14:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:12.884 14:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:12.884 14:19:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:12.884 [2024-11-06 14:19:40.417731] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:15:12.884 [2024-11-06 14:19:40.417918] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:13.142 [2024-11-06 14:19:40.610170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:13.142 [2024-11-06 14:19:40.763632] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:13.142 [2024-11-06 14:19:40.763693] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:13.142 [2024-11-06 14:19:40.763709] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:13.142 [2024-11-06 14:19:40.763720] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:13.142 [2024-11-06 14:19:40.763733] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:13.142 [2024-11-06 14:19:40.766608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:15:13.142 [2024-11-06 14:19:40.766753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:15:13.142 [2024-11-06 14:19:40.766970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:15:13.142 [2024-11-06 14:19:40.767164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:13.399 [2024-11-06 14:19:41.026668] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:13.656 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:13.656 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:15:13.656 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:13.656 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:13.656 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:13.914 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:13.914 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:13.914 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.914 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:13.914 [2024-11-06 14:19:41.331253] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:13.914 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.914 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:13.914 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.914 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:13.914 Malloc0 00:15:13.914 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.914 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:15:13.914 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.914 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:13.914 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.914 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:13.914 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.914 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:13.914 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.914 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:13.914 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.914 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:13.914 [2024-11-06 14:19:41.501779] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:13.914 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.914 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:15:13.914 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:13.914 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:15:13.914 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:15:13.914 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:13.914 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:13.914 { 00:15:13.914 "params": { 00:15:13.914 "name": "Nvme$subsystem", 00:15:13.914 "trtype": "$TEST_TRANSPORT", 00:15:13.914 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:13.914 "adrfam": "ipv4", 00:15:13.914 "trsvcid": "$NVMF_PORT", 00:15:13.914 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:13.914 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:13.914 "hdgst": ${hdgst:-false}, 00:15:13.914 "ddgst": ${ddgst:-false} 00:15:13.914 }, 00:15:13.914 "method": "bdev_nvme_attach_controller" 00:15:13.914 } 00:15:13.914 EOF 00:15:13.914 )") 00:15:13.914 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:15:13.914 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:15:13.914 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:15:13.914 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:15:13.914 "params": { 00:15:13.914 "name": "Nvme1", 00:15:13.914 "trtype": "tcp", 00:15:13.914 "traddr": "10.0.0.3", 00:15:13.914 "adrfam": "ipv4", 00:15:13.914 "trsvcid": "4420", 00:15:13.914 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:13.914 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:13.914 "hdgst": false, 00:15:13.914 "ddgst": false 00:15:13.914 }, 00:15:13.914 "method": "bdev_nvme_attach_controller" 00:15:13.914 }' 00:15:14.172 [2024-11-06 14:19:41.614115] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:15:14.172 [2024-11-06 14:19:41.614240] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70357 ] 00:15:14.172 [2024-11-06 14:19:41.802480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:14.429 [2024-11-06 14:19:41.966522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:14.429 [2024-11-06 14:19:41.966553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.429 [2024-11-06 14:19:41.966557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:14.687 [2024-11-06 14:19:42.235696] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:14.944 I/O targets: 00:15:14.944 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:14.944 00:15:14.944 00:15:14.944 CUnit - A unit testing framework for C - Version 2.1-3 00:15:14.944 http://cunit.sourceforge.net/ 00:15:14.944 00:15:14.944 00:15:14.944 Suite: bdevio tests on: Nvme1n1 00:15:14.944 Test: blockdev write read block ...passed 00:15:14.944 Test: blockdev write zeroes read block ...passed 00:15:14.944 Test: blockdev write zeroes read no split ...passed 00:15:14.944 Test: blockdev write zeroes read split ...passed 00:15:14.944 Test: blockdev write zeroes read split partial ...passed 00:15:14.944 Test: blockdev reset ...[2024-11-06 14:19:42.559364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:15:14.944 [2024-11-06 14:19:42.559546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b280 (9): Bad file descriptor 00:15:15.202 passed 00:15:15.202 Test: blockdev write read 8 blocks ...[2024-11-06 14:19:42.580440] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:15:15.202 passed 00:15:15.202 Test: blockdev write read size > 128k ...passed 00:15:15.202 Test: blockdev write read invalid size ...passed 00:15:15.202 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:15.202 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:15.202 Test: blockdev write read max offset ...passed 00:15:15.202 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:15.202 Test: blockdev writev readv 8 blocks ...passed 00:15:15.202 Test: blockdev writev readv 30 x 1block ...passed 00:15:15.202 Test: blockdev writev readv block ...passed 00:15:15.202 Test: blockdev writev readv size > 128k ...passed 00:15:15.202 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:15.202 Test: blockdev comparev and writev ...[2024-11-06 14:19:42.591561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:15.202 [2024-11-06 14:19:42.591617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:15.202 [2024-11-06 14:19:42.591648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:15.202 [2024-11-06 14:19:42.591669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:15.202 passed 00:15:15.202 Test: blockdev nvme passthru rw ...[2024-11-06 14:19:42.592351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:15.202 [2024-11-06 14:19:42.592388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:15.202 [2024-11-06 14:19:42.592411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:15.202 [2024-11-06 14:19:42.592428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:15.202 [2024-11-06 14:19:42.592831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:15.202 [2024-11-06 14:19:42.592867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:15.202 [2024-11-06 14:19:42.592888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:15.202 [2024-11-06 14:19:42.592907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:15.202 [2024-11-06 14:19:42.593314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:15.202 [2024-11-06 14:19:42.593338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:15.202 [2024-11-06 14:19:42.593358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:15.202 [2024-11-06 14:19:42.593375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED 
FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:15.202 passed 00:15:15.202 Test: blockdev nvme passthru vendor specific ...passed 00:15:15.202 Test: blockdev nvme admin passthru ...[2024-11-06 14:19:42.594334] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:15.202 [2024-11-06 14:19:42.594373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:15.202 [2024-11-06 14:19:42.594495] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:15.202 [2024-11-06 14:19:42.594518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:15.202 [2024-11-06 14:19:42.594641] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:15.202 [2024-11-06 14:19:42.594664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:15.202 [2024-11-06 14:19:42.594786] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:15.202 [2024-11-06 14:19:42.594809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:15.202 passed 00:15:15.202 Test: blockdev copy ...passed 00:15:15.202 00:15:15.202 Run Summary: Type Total Ran Passed Failed Inactive 00:15:15.202 suites 1 1 n/a 0 0 00:15:15.202 tests 23 23 23 0 0 00:15:15.202 asserts 152 152 152 0 n/a 00:15:15.202 00:15:15.202 Elapsed time = 0.351 seconds 00:15:16.593 14:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:16.593 14:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.593 14:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:16.593 14:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.593 14:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:16.593 14:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:15:16.593 14:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:16.593 14:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:15:16.593 14:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:16.593 14:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:15:16.593 14:19:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:16.593 14:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:16.593 rmmod nvme_tcp 00:15:16.593 rmmod nvme_fabrics 00:15:16.593 rmmod nvme_keyring 00:15:16.593 14:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:16.593 14:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:15:16.593 14:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:15:16.593 14:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@517 -- # '[' -n 70314 ']' 00:15:16.593 14:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 70314 00:15:16.593 14:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 70314 ']' 00:15:16.593 14:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 70314 00:15:16.593 14:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:15:16.593 14:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:16.593 14:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70314 00:15:16.593 killing process with pid 70314 00:15:16.593 14:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:15:16.593 14:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:15:16.593 14:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70314' 00:15:16.593 14:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 70314 00:15:16.593 14:19:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 70314 00:15:18.493 14:19:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:18.493 14:19:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:18.493 14:19:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:18.493 14:19:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:15:18.493 14:19:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:15:18.493 14:19:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:18.493 14:19:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:15:18.493 14:19:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:18.493 14:19:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:18.493 14:19:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:18.493 14:19:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:18.493 14:19:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:18.493 14:19:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:18.493 14:19:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:18.493 14:19:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:18.493 14:19:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:18.493 14:19:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:18.493 14:19:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:18.493 14:19:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:18.493 14:19:45 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:18.493 14:19:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:18.493 14:19:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:18.493 14:19:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:18.493 14:19:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:18.493 14:19:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:18.493 14:19:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:18.493 14:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:15:18.493 00:15:18.493 real 0m6.644s 00:15:18.493 user 0m23.911s 00:15:18.493 sys 0m1.615s 00:15:18.493 14:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:18.493 14:19:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:18.493 ************************************ 00:15:18.493 END TEST nvmf_bdevio 00:15:18.493 ************************************ 00:15:18.493 14:19:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:15:18.493 00:15:18.493 real 3m2.915s 00:15:18.493 user 7m53.894s 00:15:18.493 sys 1m4.619s 00:15:18.493 14:19:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:18.493 14:19:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:15:18.493 ************************************ 00:15:18.493 END TEST nvmf_target_core 00:15:18.493 ************************************ 00:15:18.752 14:19:46 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:15:18.752 14:19:46 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:18.752 14:19:46 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:18.752 14:19:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:18.752 ************************************ 00:15:18.752 START TEST nvmf_target_extra 00:15:18.752 ************************************ 00:15:18.752 14:19:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:15:18.752 * Looking for test storage... 
00:15:18.752 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:15:18.752 14:19:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:18.752 14:19:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:15:18.752 14:19:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:18.752 14:19:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:18.752 14:19:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:18.752 14:19:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:18.752 14:19:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:18.752 14:19:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:15:18.752 14:19:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:15:18.752 14:19:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:15:18.752 14:19:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:15:18.752 14:19:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:15:18.752 14:19:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:15:18.752 14:19:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:15:18.752 14:19:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:18.752 14:19:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:15:18.752 14:19:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:15:18.752 14:19:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:18.752 14:19:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:18.752 14:19:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:15:18.752 14:19:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:15:18.752 14:19:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:18.752 14:19:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:15:18.752 14:19:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:15:18.752 14:19:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:15:18.752 14:19:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:15:18.752 14:19:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:18.752 14:19:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:15:18.752 14:19:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:15:18.752 14:19:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:18.752 14:19:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:18.752 14:19:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:15:18.752 14:19:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:18.752 14:19:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:18.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.752 --rc genhtml_branch_coverage=1 00:15:18.752 --rc genhtml_function_coverage=1 00:15:18.752 --rc genhtml_legend=1 00:15:18.752 --rc geninfo_all_blocks=1 00:15:18.752 --rc geninfo_unexecuted_blocks=1 00:15:18.752 00:15:18.752 ' 00:15:18.752 14:19:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:18.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.752 --rc genhtml_branch_coverage=1 00:15:18.752 --rc genhtml_function_coverage=1 00:15:18.752 --rc genhtml_legend=1 00:15:18.752 --rc geninfo_all_blocks=1 00:15:18.752 --rc geninfo_unexecuted_blocks=1 00:15:18.752 00:15:18.752 ' 00:15:18.752 14:19:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:18.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.752 --rc genhtml_branch_coverage=1 00:15:18.752 --rc genhtml_function_coverage=1 00:15:18.752 --rc genhtml_legend=1 00:15:18.752 --rc geninfo_all_blocks=1 00:15:18.752 --rc geninfo_unexecuted_blocks=1 00:15:18.752 00:15:18.752 ' 00:15:18.752 14:19:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:18.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.752 --rc genhtml_branch_coverage=1 00:15:18.752 --rc genhtml_function_coverage=1 00:15:18.752 --rc genhtml_legend=1 00:15:18.752 --rc geninfo_all_blocks=1 00:15:18.752 --rc geninfo_unexecuted_blocks=1 00:15:18.752 00:15:18.752 ' 00:15:18.752 14:19:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:18.752 14:19:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:15:18.752 14:19:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:18.752 14:19:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:18.752 14:19:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:18.752 14:19:46 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:18.752 14:19:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:18.752 14:19:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:18.752 14:19:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:18.752 14:19:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:18.752 14:19:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:18.753 14:19:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:19.012 14:19:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:15:19.012 14:19:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:15:19.012 14:19:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:19.012 14:19:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:19.012 14:19:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:19.012 14:19:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:19.012 14:19:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:19.012 14:19:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:15:19.012 14:19:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:19.012 14:19:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:19.012 14:19:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:19.012 14:19:46 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.012 14:19:46 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.012 14:19:46 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.012 14:19:46 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:15:19.012 14:19:46 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.012 14:19:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:15:19.012 14:19:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:19.012 14:19:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:19.012 14:19:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:19.012 14:19:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:19.012 14:19:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:19.012 14:19:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:19.012 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:19.012 14:19:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:19.012 14:19:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:19.012 14:19:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:19.012 14:19:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:15:19.012 14:19:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:15:19.012 14:19:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:15:19.012 14:19:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:19.012 14:19:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:19.012 14:19:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:19.012 14:19:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:19.012 ************************************ 00:15:19.012 START TEST nvmf_auth_target 00:15:19.012 ************************************ 00:15:19.012 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:19.012 * Looking for test storage... 
00:15:19.012 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:19.012 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:19.012 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:15:19.012 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:19.012 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:19.012 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:19.012 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:19.012 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:19.012 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:15:19.012 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:15:19.012 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:15:19.012 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:15:19.012 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:15:19.012 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:15:19.012 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:15:19.012 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:19.012 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:15:19.012 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:15:19.012 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:19.012 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:19.012 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:15:19.273 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:15:19.273 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:19.273 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:15:19.273 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:15:19.273 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:15:19.273 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:15:19.273 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:19.273 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:15:19.273 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:15:19.273 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:19.273 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:19.273 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:15:19.273 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:19.273 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:19.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:19.273 --rc genhtml_branch_coverage=1 00:15:19.273 --rc genhtml_function_coverage=1 00:15:19.273 --rc genhtml_legend=1 00:15:19.273 --rc geninfo_all_blocks=1 00:15:19.273 --rc geninfo_unexecuted_blocks=1 00:15:19.273 00:15:19.273 ' 00:15:19.273 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:19.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:19.273 --rc genhtml_branch_coverage=1 00:15:19.273 --rc genhtml_function_coverage=1 00:15:19.273 --rc genhtml_legend=1 00:15:19.273 --rc geninfo_all_blocks=1 00:15:19.273 --rc geninfo_unexecuted_blocks=1 00:15:19.273 00:15:19.273 ' 00:15:19.273 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:19.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:19.273 --rc genhtml_branch_coverage=1 00:15:19.273 --rc genhtml_function_coverage=1 00:15:19.273 --rc genhtml_legend=1 00:15:19.273 --rc geninfo_all_blocks=1 00:15:19.273 --rc geninfo_unexecuted_blocks=1 00:15:19.273 00:15:19.273 ' 00:15:19.273 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:19.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:19.273 --rc genhtml_branch_coverage=1 00:15:19.273 --rc genhtml_function_coverage=1 00:15:19.273 --rc genhtml_legend=1 00:15:19.273 --rc geninfo_all_blocks=1 00:15:19.273 --rc geninfo_unexecuted_blocks=1 00:15:19.273 00:15:19.273 ' 00:15:19.273 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:19.273 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:15:19.273 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:19.274 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:19.274 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:19.274 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:19.274 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:19.274 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:19.274 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:19.274 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:19.274 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:19.274 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:19.274 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:15:19.274 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:15:19.274 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:19.274 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:19.274 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:19.274 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:19.274 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:19.274 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:15:19.274 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:19.274 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:19.274 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:19.274 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.274 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.274 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.274 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:19.274 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.274 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:15:19.274 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:19.274 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:19.274 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:19.274 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:19.274 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:19.274 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:19.274 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:19.274 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:19.274 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:19.274 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:19.274 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:19.274 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:19.274 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:19.274 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:15:19.274 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:19.274 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:19.274 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:15:19.274 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:15:19.274 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:19.274 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:19.274 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:19.274 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:19.274 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:19.274 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:19.274 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:19.274 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:19.274 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:19.274 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:19.274 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:19.274 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:19.274 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:19.274 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:19.274 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:19.274 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:19.274 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:19.274 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:19.275 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:19.275 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:19.275 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:19.275 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:19.275 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:19.275 
14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:19.275 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:19.275 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:19.275 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:19.275 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:19.275 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:19.275 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:19.275 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:19.275 Cannot find device "nvmf_init_br" 00:15:19.275 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:15:19.275 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:19.275 Cannot find device "nvmf_init_br2" 00:15:19.275 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:15:19.275 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:19.275 Cannot find device "nvmf_tgt_br" 00:15:19.275 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:15:19.275 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:19.275 Cannot find device "nvmf_tgt_br2" 00:15:19.275 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:15:19.275 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:19.275 Cannot find device "nvmf_init_br" 00:15:19.275 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:15:19.275 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:19.275 Cannot find device "nvmf_init_br2" 00:15:19.275 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:15:19.275 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:19.275 Cannot find device "nvmf_tgt_br" 00:15:19.275 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:15:19.275 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:19.275 Cannot find device "nvmf_tgt_br2" 00:15:19.275 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:15:19.275 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:19.275 Cannot find device "nvmf_br" 00:15:19.275 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:15:19.275 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:19.275 Cannot find device "nvmf_init_if" 00:15:19.275 14:19:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:15:19.275 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:19.275 Cannot find device "nvmf_init_if2" 00:15:19.534 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:15:19.534 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:19.534 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:19.534 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:15:19.534 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:19.534 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:19.534 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:15:19.534 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:19.534 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:19.534 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:19.534 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:19.534 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:19.534 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:19.534 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:19.534 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:19.534 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:19.534 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:19.534 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:19.534 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:19.534 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:19.534 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:19.534 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:19.534 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:19.534 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:19.534 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:19.534 14:19:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:19.534 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:19.534 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:19.534 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:19.534 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:19.534 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:19.534 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:19.534 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:19.534 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:19.534 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:19.534 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:19.534 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:19.534 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:19.534 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:19.534 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:19.534 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:19.534 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.134 ms 00:15:19.534 00:15:19.534 --- 10.0.0.3 ping statistics --- 00:15:19.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:19.534 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:15:19.534 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:19.793 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:19.793 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.081 ms 00:15:19.793 00:15:19.793 --- 10.0.0.4 ping statistics --- 00:15:19.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:19.793 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:15:19.793 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:19.793 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:19.793 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:15:19.793 00:15:19.793 --- 10.0.0.1 ping statistics --- 00:15:19.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:19.793 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:15:19.793 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:19.793 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:19.793 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:15:19.793 00:15:19.793 --- 10.0.0.2 ping statistics --- 00:15:19.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:19.793 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:15:19.794 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:19.794 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:15:19.794 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:19.794 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:19.794 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:19.794 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:19.794 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:19.794 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:19.794 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:19.794 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:15:19.794 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:19.794 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:19.794 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.794 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:15:19.794 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=70706 00:15:19.794 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 70706 00:15:19.794 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 70706 ']' 00:15:19.794 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:19.794 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:19.794 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
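The nvmf_veth_init calls traced above build a small bridged veth topology: the target end of each pair is moved into the nvmf_tgt_ns_spdk network namespace (10.0.0.3/10.0.0.4), the initiator-side interfaces stay on the host (10.0.0.1/10.0.0.2), the host-side peers are enslaved to the nvmf_br bridge, and iptables rules open TCP port 4420 before connectivity is confirmed with the pings shown. What follows is a minimal standalone sketch of the same topology, using only the interface names and addresses visible in the log (one veth pair per side is shown; the test creates two per side) -- it is not the SPDK common.sh code itself:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side stays on the host
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side will move into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge                              # bridge the host-side peers together
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic to the target port
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3                                           # host reaches the namespace over the bridge
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1            # and back

With that in place the trace starts nvmf_tgt inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth), so the host-side initiator can later attach to it at 10.0.0.3:4420.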
00:15:19.794 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:19.794 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.730 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:20.730 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:15:20.730 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:20.730 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:20.730 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.730 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:20.730 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=70734 00:15:20.730 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:20.730 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:15:20.730 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:15:20.730 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:20.730 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:20.730 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:20.730 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:15:20.730 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:20.730 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:20.730 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=20b75349952b028af9b014a63ba9f70c967a6607d1da2b5d 00:15:20.730 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:15:20.730 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.kap 00:15:20.730 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 20b75349952b028af9b014a63ba9f70c967a6607d1da2b5d 0 00:15:20.730 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 20b75349952b028af9b014a63ba9f70c967a6607d1da2b5d 0 00:15:20.730 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:20.730 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:20.730 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=20b75349952b028af9b014a63ba9f70c967a6607d1da2b5d 00:15:20.730 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:15:20.730 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:20.990 14:19:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.kap 00:15:20.990 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.kap 00:15:20.990 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.kap 00:15:20.990 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:15:20.990 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:20.990 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:20.990 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:20.990 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:20.990 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:20.990 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:20.990 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=1e16f86996d77905a2a94f39c84455093a32a93dda8f3a8fd94637bb6357de68 00:15:20.990 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:20.990 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.1C8 00:15:20.990 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 1e16f86996d77905a2a94f39c84455093a32a93dda8f3a8fd94637bb6357de68 3 00:15:20.990 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 1e16f86996d77905a2a94f39c84455093a32a93dda8f3a8fd94637bb6357de68 3 00:15:20.990 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:20.990 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:20.990 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=1e16f86996d77905a2a94f39c84455093a32a93dda8f3a8fd94637bb6357de68 00:15:20.990 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:15:20.990 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:20.990 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.1C8 00:15:20.990 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.1C8 00:15:20.990 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.1C8 00:15:20.990 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:15:20.990 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:20.990 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:20.990 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:20.990 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:15:20.990 14:19:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:20.990 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:20.990 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a902c08f7a7d0fc5d482a096ddff9f26 00:15:20.990 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:20.990 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.ifU 00:15:20.990 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a902c08f7a7d0fc5d482a096ddff9f26 1 00:15:20.990 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a902c08f7a7d0fc5d482a096ddff9f26 1 00:15:20.990 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:20.990 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:20.990 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a902c08f7a7d0fc5d482a096ddff9f26 00:15:20.990 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:15:20.990 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:20.990 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.ifU 00:15:20.990 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.ifU 00:15:20.990 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.ifU 00:15:20.990 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:15:20.990 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:20.990 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:20.990 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:20.990 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:20.990 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:20.990 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:20.990 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0a79d22cf4b0306583b8f03844aefd10f3f1c7fe0e44db11 00:15:20.990 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:20.990 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.ZZg 00:15:20.990 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 0a79d22cf4b0306583b8f03844aefd10f3f1c7fe0e44db11 2 00:15:20.990 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 0a79d22cf4b0306583b8f03844aefd10f3f1c7fe0e44db11 2 00:15:20.990 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:20.990 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:20.990 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0a79d22cf4b0306583b8f03844aefd10f3f1c7fe0e44db11 00:15:21.249 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:21.250 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:21.250 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.ZZg 00:15:21.250 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.ZZg 00:15:21.250 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.ZZg 00:15:21.250 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:15:21.250 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:21.250 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:21.250 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:21.250 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:21.250 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:21.250 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:21.250 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=55cf39d461df8e389eb3635e4309756047449af9b2ffec9f 00:15:21.250 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:21.250 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.56O 00:15:21.250 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 55cf39d461df8e389eb3635e4309756047449af9b2ffec9f 2 00:15:21.250 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 55cf39d461df8e389eb3635e4309756047449af9b2ffec9f 2 00:15:21.250 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:21.250 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:21.250 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=55cf39d461df8e389eb3635e4309756047449af9b2ffec9f 00:15:21.250 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:21.250 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:21.250 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.56O 00:15:21.250 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.56O 00:15:21.250 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.56O 00:15:21.250 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:15:21.250 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:21.250 14:19:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:21.250 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:21.250 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:15:21.250 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:21.250 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:21.250 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ea0243bc482ed5990ab6d9cf2227fde1 00:15:21.250 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:21.250 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.b9F 00:15:21.250 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ea0243bc482ed5990ab6d9cf2227fde1 1 00:15:21.250 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ea0243bc482ed5990ab6d9cf2227fde1 1 00:15:21.250 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:21.250 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:21.250 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ea0243bc482ed5990ab6d9cf2227fde1 00:15:21.250 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:15:21.250 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:21.250 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.b9F 00:15:21.250 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.b9F 00:15:21.250 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.b9F 00:15:21.250 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:15:21.250 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:21.250 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:21.250 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:21.250 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:21.250 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:21.250 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:21.250 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b60b1886bde3ba6eb74c1d17627adab8d05e3d8d3ca251eaf90764ac7fc2fd3d 00:15:21.250 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:21.250 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.PEB 00:15:21.250 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
b60b1886bde3ba6eb74c1d17627adab8d05e3d8d3ca251eaf90764ac7fc2fd3d 3 00:15:21.250 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b60b1886bde3ba6eb74c1d17627adab8d05e3d8d3ca251eaf90764ac7fc2fd3d 3 00:15:21.250 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:21.250 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:21.250 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b60b1886bde3ba6eb74c1d17627adab8d05e3d8d3ca251eaf90764ac7fc2fd3d 00:15:21.250 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:15:21.250 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:21.509 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.PEB 00:15:21.509 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.PEB 00:15:21.509 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.PEB 00:15:21.509 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:15:21.509 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 70706 00:15:21.509 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 70706 ']' 00:15:21.509 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:21.509 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:21.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:21.509 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:21.509 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:21.509 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.509 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:21.509 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:15:21.509 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 70734 /var/tmp/host.sock 00:15:21.509 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 70734 ']' 00:15:21.509 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:15:21.510 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:21.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:21.510 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
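Each gen_dhchap_key call above writes a DHHC-1 formatted secret to a mode-0600 file under /tmp (keys 0-3 plus controller keys ckeys 0-2). The trace that follows wires those files into both sides of the authentication: each file is registered by name in the keyring of the target (default RPC socket /var/tmp/spdk.sock) and of the host application started with -r /var/tmp/host.sock, and those key names are then referenced when the host NQN is allowed on the subsystem and when the controller is attached. Below is a condensed sketch of that RPC sequence, assembled from the commands visible in this trace; creating the TCP transport, the subsystem and its 10.0.0.3:4420 listener is assumed to happen elsewhere in auth.sh and is not shown here:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Target side: register the key files under the names key0/ckey0.
  $rpc keyring_file_add_key key0 /tmp/spdk.key-null.kap
  $rpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1C8

  # Host side (spdk_tgt started with -r /var/tmp/host.sock): same names, same files.
  $rpc -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.kap
  $rpc -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1C8

  # Allow the host NQN on the subsystem with that DH-HMAC-CHAP key pair.
  $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Host side: pin the digest/dhgroup under test, then attach using the same key names.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

The remainder of the trace repeats this connect, verify-qpair-auth-state, detach cycle for every combination of digest (sha256, sha384, sha512) and DH group (null, ffdhe2048 through ffdhe8192) declared at the top of auth.sh.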
00:15:21.510 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:21.510 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.078 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:22.078 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:15:22.078 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:15:22.078 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.078 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.078 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.078 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:22.078 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.kap 00:15:22.078 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.078 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.078 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.078 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.kap 00:15:22.078 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.kap 00:15:22.337 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.1C8 ]] 00:15:22.337 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1C8 00:15:22.337 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.337 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.337 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.337 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1C8 00:15:22.337 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1C8 00:15:22.596 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:22.596 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.ifU 00:15:22.596 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.596 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.855 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.855 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.ifU 00:15:22.855 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.ifU 00:15:22.855 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.ZZg ]] 00:15:22.855 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ZZg 00:15:22.855 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.855 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.855 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.855 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ZZg 00:15:22.855 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ZZg 00:15:23.113 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:23.113 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.56O 00:15:23.113 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.113 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.113 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.113 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.56O 00:15:23.113 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.56O 00:15:23.372 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.b9F ]] 00:15:23.372 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.b9F 00:15:23.372 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.372 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.372 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.372 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.b9F 00:15:23.372 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.b9F 00:15:23.631 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:23.631 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.PEB 00:15:23.631 14:19:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.631 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.631 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.631 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.PEB 00:15:23.631 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.PEB 00:15:23.890 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:15:23.890 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:23.890 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:23.890 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:23.890 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:23.890 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:24.149 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:15:24.149 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:24.149 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:24.149 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:24.149 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:24.149 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:24.149 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:24.149 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.149 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.149 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.149 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:24.149 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:24.149 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:24.408 00:15:24.408 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:24.408 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:24.408 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.666 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.666 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.666 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.666 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.666 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.666 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:24.666 { 00:15:24.666 "cntlid": 1, 00:15:24.666 "qid": 0, 00:15:24.666 "state": "enabled", 00:15:24.666 "thread": "nvmf_tgt_poll_group_000", 00:15:24.666 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:15:24.666 "listen_address": { 00:15:24.666 "trtype": "TCP", 00:15:24.666 "adrfam": "IPv4", 00:15:24.666 "traddr": "10.0.0.3", 00:15:24.666 "trsvcid": "4420" 00:15:24.666 }, 00:15:24.666 "peer_address": { 00:15:24.666 "trtype": "TCP", 00:15:24.666 "adrfam": "IPv4", 00:15:24.666 "traddr": "10.0.0.1", 00:15:24.666 "trsvcid": "55544" 00:15:24.666 }, 00:15:24.666 "auth": { 00:15:24.666 "state": "completed", 00:15:24.666 "digest": "sha256", 00:15:24.666 "dhgroup": "null" 00:15:24.666 } 00:15:24.666 } 00:15:24.666 ]' 00:15:24.666 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:24.666 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:24.666 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:24.666 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:24.666 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:24.666 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.666 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.666 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.926 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjBiNzUzNDk5NTJiMDI4YWY5YjAxNGE2M2JhOWY3MGM5NjdhNjYwN2QxZGEyYjVk1hdk9g==: --dhchap-ctrl-secret DHHC-1:03:MWUxNmY4Njk5NmQ3NzkwNWEyYTk0ZjM5Yzg0NDU1MDkzYTMyYTkzZGRhOGYzYThmZDk0NjM3YmI2MzU3ZGU2OKKEk9A=: 00:15:24.926 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:00:MjBiNzUzNDk5NTJiMDI4YWY5YjAxNGE2M2JhOWY3MGM5NjdhNjYwN2QxZGEyYjVk1hdk9g==: --dhchap-ctrl-secret DHHC-1:03:MWUxNmY4Njk5NmQ3NzkwNWEyYTk0ZjM5Yzg0NDU1MDkzYTMyYTkzZGRhOGYzYThmZDk0NjM3YmI2MzU3ZGU2OKKEk9A=: 00:15:29.116 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:29.116 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:29.116 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:15:29.116 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.116 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.116 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.116 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:29.116 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:29.116 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:29.116 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:15:29.116 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:29.116 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:29.116 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:29.116 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:29.116 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:29.116 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:29.116 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.117 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.117 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.117 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:29.117 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:29.117 14:19:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:29.376 00:15:29.376 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:29.376 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.376 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:29.635 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.635 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:29.635 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.635 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.635 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.635 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:29.635 { 00:15:29.635 "cntlid": 3, 00:15:29.635 "qid": 0, 00:15:29.635 "state": "enabled", 00:15:29.635 "thread": "nvmf_tgt_poll_group_000", 00:15:29.635 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:15:29.635 "listen_address": { 00:15:29.635 "trtype": "TCP", 00:15:29.635 "adrfam": "IPv4", 00:15:29.635 "traddr": "10.0.0.3", 00:15:29.635 "trsvcid": "4420" 00:15:29.635 }, 00:15:29.635 "peer_address": { 00:15:29.635 "trtype": "TCP", 00:15:29.635 "adrfam": "IPv4", 00:15:29.635 "traddr": "10.0.0.1", 00:15:29.635 "trsvcid": "55572" 00:15:29.635 }, 00:15:29.635 "auth": { 00:15:29.635 "state": "completed", 00:15:29.635 "digest": "sha256", 00:15:29.635 "dhgroup": "null" 00:15:29.635 } 00:15:29.635 } 00:15:29.635 ]' 00:15:29.635 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:29.635 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:29.635 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:29.635 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:29.635 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:29.894 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:29.894 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:29.894 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.894 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTkwMmMwOGY3YTdkMGZjNWQ0ODJhMDk2ZGRmZjlmMjZ64U1I: --dhchap-ctrl-secret 
DHHC-1:02:MGE3OWQyMmNmNGIwMzA2NTgzYjhmMDM4NDRhZWZkMTBmM2YxYzdmZTBlNDRkYjExHkTEaw==: 00:15:29.894 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:01:YTkwMmMwOGY3YTdkMGZjNWQ0ODJhMDk2ZGRmZjlmMjZ64U1I: --dhchap-ctrl-secret DHHC-1:02:MGE3OWQyMmNmNGIwMzA2NTgzYjhmMDM4NDRhZWZkMTBmM2YxYzdmZTBlNDRkYjExHkTEaw==: 00:15:30.831 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:30.831 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.831 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:15:30.831 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.831 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.831 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.831 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:30.831 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:30.831 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:30.831 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:15:30.831 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:30.831 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:30.831 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:30.831 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:30.831 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:30.831 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.831 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.831 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.831 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.831 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.831 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.831 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:31.090 00:15:31.090 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:31.090 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:31.090 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:31.347 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.348 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:31.348 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.348 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.348 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.348 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:31.348 { 00:15:31.348 "cntlid": 5, 00:15:31.348 "qid": 0, 00:15:31.348 "state": "enabled", 00:15:31.348 "thread": "nvmf_tgt_poll_group_000", 00:15:31.348 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:15:31.348 "listen_address": { 00:15:31.348 "trtype": "TCP", 00:15:31.348 "adrfam": "IPv4", 00:15:31.348 "traddr": "10.0.0.3", 00:15:31.348 "trsvcid": "4420" 00:15:31.348 }, 00:15:31.348 "peer_address": { 00:15:31.348 "trtype": "TCP", 00:15:31.348 "adrfam": "IPv4", 00:15:31.348 "traddr": "10.0.0.1", 00:15:31.348 "trsvcid": "55592" 00:15:31.348 }, 00:15:31.348 "auth": { 00:15:31.348 "state": "completed", 00:15:31.348 "digest": "sha256", 00:15:31.348 "dhgroup": "null" 00:15:31.348 } 00:15:31.348 } 00:15:31.348 ]' 00:15:31.348 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:31.607 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:31.607 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:31.607 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:31.607 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:31.607 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:31.607 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:31.607 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:31.866 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:NTVjZjM5ZDQ2MWRmOGUzODllYjM2MzVlNDMwOTc1NjA0NzQ0OWFmOWIyZmZlYzlmWliPlg==: --dhchap-ctrl-secret DHHC-1:01:ZWEwMjQzYmM0ODJlZDU5OTBhYjZkOWNmMjIyN2ZkZTGcv5Bp: 00:15:31.866 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:02:NTVjZjM5ZDQ2MWRmOGUzODllYjM2MzVlNDMwOTc1NjA0NzQ0OWFmOWIyZmZlYzlmWliPlg==: --dhchap-ctrl-secret DHHC-1:01:ZWEwMjQzYmM0ODJlZDU5OTBhYjZkOWNmMjIyN2ZkZTGcv5Bp: 00:15:32.440 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:32.440 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:32.440 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:15:32.440 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.440 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.440 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.440 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:32.440 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:32.440 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:32.699 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:15:32.699 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:32.699 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:32.699 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:32.699 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:32.699 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:32.699 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key3 00:15:32.699 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.699 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.699 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.699 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:32.699 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:32.699 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:32.958 00:15:33.216 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:33.216 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:33.216 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:33.216 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.216 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:33.216 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.216 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.474 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.474 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:33.474 { 00:15:33.474 "cntlid": 7, 00:15:33.474 "qid": 0, 00:15:33.474 "state": "enabled", 00:15:33.474 "thread": "nvmf_tgt_poll_group_000", 00:15:33.474 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:15:33.475 "listen_address": { 00:15:33.475 "trtype": "TCP", 00:15:33.475 "adrfam": "IPv4", 00:15:33.475 "traddr": "10.0.0.3", 00:15:33.475 "trsvcid": "4420" 00:15:33.475 }, 00:15:33.475 "peer_address": { 00:15:33.475 "trtype": "TCP", 00:15:33.475 "adrfam": "IPv4", 00:15:33.475 "traddr": "10.0.0.1", 00:15:33.475 "trsvcid": "55616" 00:15:33.475 }, 00:15:33.475 "auth": { 00:15:33.475 "state": "completed", 00:15:33.475 "digest": "sha256", 00:15:33.475 "dhgroup": "null" 00:15:33.475 } 00:15:33.475 } 00:15:33.475 ]' 00:15:33.475 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:33.475 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:33.475 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:33.475 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:33.475 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:33.475 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:33.475 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:33.475 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:33.733 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YjYwYjE4ODZiZGUzYmE2ZWI3NGMxZDE3NjI3YWRhYjhkMDVlM2Q4ZDNjYTI1MWVhZjkwNzY0YWM3ZmMyZmQzZImPBGY=: 00:15:33.734 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:03:YjYwYjE4ODZiZGUzYmE2ZWI3NGMxZDE3NjI3YWRhYjhkMDVlM2Q4ZDNjYTI1MWVhZjkwNzY0YWM3ZmMyZmQzZImPBGY=: 00:15:34.300 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:34.300 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:34.300 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:15:34.300 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.300 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.300 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.300 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:34.300 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:34.300 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:34.300 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:34.558 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:15:34.558 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:34.558 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:34.558 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:34.558 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:34.558 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:34.558 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:34.559 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.559 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.559 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.559 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:34.559 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:34.559 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:34.817 00:15:35.075 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:35.075 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:35.075 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.075 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.075 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.075 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.075 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.334 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.334 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:35.334 { 00:15:35.334 "cntlid": 9, 00:15:35.334 "qid": 0, 00:15:35.334 "state": "enabled", 00:15:35.334 "thread": "nvmf_tgt_poll_group_000", 00:15:35.334 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:15:35.334 "listen_address": { 00:15:35.334 "trtype": "TCP", 00:15:35.334 "adrfam": "IPv4", 00:15:35.334 "traddr": "10.0.0.3", 00:15:35.334 "trsvcid": "4420" 00:15:35.334 }, 00:15:35.334 "peer_address": { 00:15:35.334 "trtype": "TCP", 00:15:35.334 "adrfam": "IPv4", 00:15:35.334 "traddr": "10.0.0.1", 00:15:35.334 "trsvcid": "59614" 00:15:35.334 }, 00:15:35.334 "auth": { 00:15:35.334 "state": "completed", 00:15:35.334 "digest": "sha256", 00:15:35.334 "dhgroup": "ffdhe2048" 00:15:35.334 } 00:15:35.334 } 00:15:35.334 ]' 00:15:35.334 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:35.334 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:35.334 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:35.334 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:35.334 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:35.334 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:35.334 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:35.334 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.593 
14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjBiNzUzNDk5NTJiMDI4YWY5YjAxNGE2M2JhOWY3MGM5NjdhNjYwN2QxZGEyYjVk1hdk9g==: --dhchap-ctrl-secret DHHC-1:03:MWUxNmY4Njk5NmQ3NzkwNWEyYTk0ZjM5Yzg0NDU1MDkzYTMyYTkzZGRhOGYzYThmZDk0NjM3YmI2MzU3ZGU2OKKEk9A=: 00:15:35.593 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:00:MjBiNzUzNDk5NTJiMDI4YWY5YjAxNGE2M2JhOWY3MGM5NjdhNjYwN2QxZGEyYjVk1hdk9g==: --dhchap-ctrl-secret DHHC-1:03:MWUxNmY4Njk5NmQ3NzkwNWEyYTk0ZjM5Yzg0NDU1MDkzYTMyYTkzZGRhOGYzYThmZDk0NjM3YmI2MzU3ZGU2OKKEk9A=: 00:15:36.158 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.158 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.158 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:15:36.158 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.158 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.158 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.158 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:36.158 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:36.158 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:36.416 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:15:36.416 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:36.416 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:36.417 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:36.417 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:36.417 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:36.417 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:36.417 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.417 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.417 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.417 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:36.417 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:36.417 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:36.675 00:15:36.933 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:36.933 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:36.933 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:36.933 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.933 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:36.933 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.933 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.193 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.193 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:37.193 { 00:15:37.193 "cntlid": 11, 00:15:37.193 "qid": 0, 00:15:37.193 "state": "enabled", 00:15:37.193 "thread": "nvmf_tgt_poll_group_000", 00:15:37.193 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:15:37.193 "listen_address": { 00:15:37.193 "trtype": "TCP", 00:15:37.193 "adrfam": "IPv4", 00:15:37.193 "traddr": "10.0.0.3", 00:15:37.193 "trsvcid": "4420" 00:15:37.193 }, 00:15:37.193 "peer_address": { 00:15:37.193 "trtype": "TCP", 00:15:37.193 "adrfam": "IPv4", 00:15:37.193 "traddr": "10.0.0.1", 00:15:37.193 "trsvcid": "59642" 00:15:37.193 }, 00:15:37.193 "auth": { 00:15:37.193 "state": "completed", 00:15:37.193 "digest": "sha256", 00:15:37.193 "dhgroup": "ffdhe2048" 00:15:37.193 } 00:15:37.193 } 00:15:37.193 ]' 00:15:37.193 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:37.193 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:37.193 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:37.193 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:37.193 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:37.193 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.193 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.193 
14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.452 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTkwMmMwOGY3YTdkMGZjNWQ0ODJhMDk2ZGRmZjlmMjZ64U1I: --dhchap-ctrl-secret DHHC-1:02:MGE3OWQyMmNmNGIwMzA2NTgzYjhmMDM4NDRhZWZkMTBmM2YxYzdmZTBlNDRkYjExHkTEaw==: 00:15:37.452 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:01:YTkwMmMwOGY3YTdkMGZjNWQ0ODJhMDk2ZGRmZjlmMjZ64U1I: --dhchap-ctrl-secret DHHC-1:02:MGE3OWQyMmNmNGIwMzA2NTgzYjhmMDM4NDRhZWZkMTBmM2YxYzdmZTBlNDRkYjExHkTEaw==: 00:15:38.020 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.020 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.020 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:15:38.020 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.020 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.020 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.020 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:38.020 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:38.020 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:38.280 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:15:38.280 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:38.280 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:38.280 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:38.280 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:38.280 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:38.280 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:38.280 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.280 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.280 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:15:38.280 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:38.280 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:38.280 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:38.538 00:15:38.797 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:38.797 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:38.797 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.056 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.056 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.056 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.056 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.056 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.056 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:39.056 { 00:15:39.056 "cntlid": 13, 00:15:39.056 "qid": 0, 00:15:39.056 "state": "enabled", 00:15:39.056 "thread": "nvmf_tgt_poll_group_000", 00:15:39.056 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:15:39.056 "listen_address": { 00:15:39.056 "trtype": "TCP", 00:15:39.056 "adrfam": "IPv4", 00:15:39.056 "traddr": "10.0.0.3", 00:15:39.056 "trsvcid": "4420" 00:15:39.056 }, 00:15:39.056 "peer_address": { 00:15:39.056 "trtype": "TCP", 00:15:39.056 "adrfam": "IPv4", 00:15:39.056 "traddr": "10.0.0.1", 00:15:39.056 "trsvcid": "59674" 00:15:39.056 }, 00:15:39.056 "auth": { 00:15:39.056 "state": "completed", 00:15:39.056 "digest": "sha256", 00:15:39.056 "dhgroup": "ffdhe2048" 00:15:39.056 } 00:15:39.056 } 00:15:39.056 ]' 00:15:39.056 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:39.056 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:39.056 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:39.056 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:39.056 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:39.315 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.315 14:20:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.315 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.315 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTVjZjM5ZDQ2MWRmOGUzODllYjM2MzVlNDMwOTc1NjA0NzQ0OWFmOWIyZmZlYzlmWliPlg==: --dhchap-ctrl-secret DHHC-1:01:ZWEwMjQzYmM0ODJlZDU5OTBhYjZkOWNmMjIyN2ZkZTGcv5Bp: 00:15:39.315 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:02:NTVjZjM5ZDQ2MWRmOGUzODllYjM2MzVlNDMwOTc1NjA0NzQ0OWFmOWIyZmZlYzlmWliPlg==: --dhchap-ctrl-secret DHHC-1:01:ZWEwMjQzYmM0ODJlZDU5OTBhYjZkOWNmMjIyN2ZkZTGcv5Bp: 00:15:40.251 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.251 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.251 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:15:40.251 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.251 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.251 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.251 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:40.251 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:40.251 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:40.251 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:15:40.251 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:40.251 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:40.251 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:40.251 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:40.251 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.251 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key3 00:15:40.251 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.251 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
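Each (digest, dhgroup, key) pass traced in this section reduces to roughly the RPC sequence below. This is a sketch, not a verbatim excerpt of target/auth.sh: it assumes the target RPC server listens on its default socket (the target-side socket path never appears in the trace) and it reuses the host socket and key files generated earlier in this run.

# Target side (default RPC socket assumed): register the key pair, then allow the host NQN with it.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.kap
/home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1C8
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side (-s /var/tmp/host.sock): register the same keys, pin the digest/dhgroup under test,
# attach a controller with DH-HMAC-CHAP, verify the qpair's auth state, then detach.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.kap
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1C8
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups null
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth'
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

The trace that follows continues the same pattern for the remaining keys and for the ffdhe2048/ffdhe3072 dhgroups, checking each qpair reports digest, dhgroup, and state "completed" before detaching.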
00:15:40.251 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.251 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:40.251 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:40.251 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:40.510 00:15:40.510 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:40.510 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:40.510 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.769 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.769 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:40.769 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.769 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.028 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.028 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:41.028 { 00:15:41.028 "cntlid": 15, 00:15:41.028 "qid": 0, 00:15:41.028 "state": "enabled", 00:15:41.028 "thread": "nvmf_tgt_poll_group_000", 00:15:41.028 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:15:41.028 "listen_address": { 00:15:41.028 "trtype": "TCP", 00:15:41.028 "adrfam": "IPv4", 00:15:41.028 "traddr": "10.0.0.3", 00:15:41.028 "trsvcid": "4420" 00:15:41.028 }, 00:15:41.028 "peer_address": { 00:15:41.028 "trtype": "TCP", 00:15:41.028 "adrfam": "IPv4", 00:15:41.028 "traddr": "10.0.0.1", 00:15:41.028 "trsvcid": "59698" 00:15:41.028 }, 00:15:41.028 "auth": { 00:15:41.028 "state": "completed", 00:15:41.028 "digest": "sha256", 00:15:41.028 "dhgroup": "ffdhe2048" 00:15:41.028 } 00:15:41.028 } 00:15:41.028 ]' 00:15:41.028 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:41.028 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:41.028 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:41.028 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:41.028 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:41.028 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:41.028 
14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:41.028 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.287 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjYwYjE4ODZiZGUzYmE2ZWI3NGMxZDE3NjI3YWRhYjhkMDVlM2Q4ZDNjYTI1MWVhZjkwNzY0YWM3ZmMyZmQzZImPBGY=: 00:15:41.287 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:03:YjYwYjE4ODZiZGUzYmE2ZWI3NGMxZDE3NjI3YWRhYjhkMDVlM2Q4ZDNjYTI1MWVhZjkwNzY0YWM3ZmMyZmQzZImPBGY=: 00:15:41.854 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.854 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.854 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:15:41.854 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.854 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.854 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.854 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:41.854 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:41.854 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:41.854 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:42.113 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:15:42.113 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:42.113 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:42.113 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:42.113 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:42.113 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:42.113 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.113 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.113 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:42.113 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.113 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.113 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.113 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.371 00:15:42.371 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:42.371 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:42.371 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.630 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.630 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.630 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.630 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.630 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.630 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:42.630 { 00:15:42.630 "cntlid": 17, 00:15:42.630 "qid": 0, 00:15:42.630 "state": "enabled", 00:15:42.630 "thread": "nvmf_tgt_poll_group_000", 00:15:42.630 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:15:42.630 "listen_address": { 00:15:42.630 "trtype": "TCP", 00:15:42.630 "adrfam": "IPv4", 00:15:42.630 "traddr": "10.0.0.3", 00:15:42.630 "trsvcid": "4420" 00:15:42.630 }, 00:15:42.630 "peer_address": { 00:15:42.630 "trtype": "TCP", 00:15:42.630 "adrfam": "IPv4", 00:15:42.630 "traddr": "10.0.0.1", 00:15:42.630 "trsvcid": "59722" 00:15:42.630 }, 00:15:42.630 "auth": { 00:15:42.630 "state": "completed", 00:15:42.630 "digest": "sha256", 00:15:42.630 "dhgroup": "ffdhe3072" 00:15:42.630 } 00:15:42.630 } 00:15:42.630 ]' 00:15:42.630 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:42.889 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:42.889 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:42.889 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:42.889 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:42.889 14:20:10 
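For readers reconstructing the flow from the trace: the entries above are one pass of the test's connect-and-authenticate helper for digest sha256, DH group ffdhe3072 and key0. Below is a minimal shell sketch of that pass, put together only from commands that appear verbatim in this log; the variable names and the use of rpc.py's default socket for the target side are assumptions, not the actual target/auth.sh source.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0

# Host side: restrict the initiator to the digest/DH group under test.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072

# Target side: allow the host NQN with the key pair under test
# (shown here on rpc.py's default socket; the log uses the rpc_cmd wrapper).
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side: attaching a controller forces DH-HMAC-CHAP to run.
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q "$hostnqn" -n "$subnqn" \
        -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

The same sequence then repeats for key1 through key3 and, one level up, for each DH group in the test's list (this section of the log covers ffdhe2048 through ffdhe6144).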
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.889 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.889 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.148 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjBiNzUzNDk5NTJiMDI4YWY5YjAxNGE2M2JhOWY3MGM5NjdhNjYwN2QxZGEyYjVk1hdk9g==: --dhchap-ctrl-secret DHHC-1:03:MWUxNmY4Njk5NmQ3NzkwNWEyYTk0ZjM5Yzg0NDU1MDkzYTMyYTkzZGRhOGYzYThmZDk0NjM3YmI2MzU3ZGU2OKKEk9A=: 00:15:43.148 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:00:MjBiNzUzNDk5NTJiMDI4YWY5YjAxNGE2M2JhOWY3MGM5NjdhNjYwN2QxZGEyYjVk1hdk9g==: --dhchap-ctrl-secret DHHC-1:03:MWUxNmY4Njk5NmQ3NzkwNWEyYTk0ZjM5Yzg0NDU1MDkzYTMyYTkzZGRhOGYzYThmZDk0NjM3YmI2MzU3ZGU2OKKEk9A=: 00:15:43.714 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.714 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:15:43.715 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.715 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.715 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.715 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:43.715 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:43.715 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:43.973 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:15:43.973 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:43.973 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:43.973 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:43.973 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:43.973 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.973 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:15:43.973 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.973 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.973 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.973 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:43.973 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:43.973 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.231 00:15:44.490 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:44.490 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:44.490 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.490 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.490 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.490 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.490 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.749 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.749 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:44.749 { 00:15:44.749 "cntlid": 19, 00:15:44.749 "qid": 0, 00:15:44.749 "state": "enabled", 00:15:44.749 "thread": "nvmf_tgt_poll_group_000", 00:15:44.749 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:15:44.749 "listen_address": { 00:15:44.749 "trtype": "TCP", 00:15:44.749 "adrfam": "IPv4", 00:15:44.749 "traddr": "10.0.0.3", 00:15:44.749 "trsvcid": "4420" 00:15:44.749 }, 00:15:44.749 "peer_address": { 00:15:44.749 "trtype": "TCP", 00:15:44.749 "adrfam": "IPv4", 00:15:44.749 "traddr": "10.0.0.1", 00:15:44.749 "trsvcid": "33078" 00:15:44.749 }, 00:15:44.749 "auth": { 00:15:44.749 "state": "completed", 00:15:44.749 "digest": "sha256", 00:15:44.749 "dhgroup": "ffdhe3072" 00:15:44.749 } 00:15:44.749 } 00:15:44.749 ]' 00:15:44.749 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:44.749 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:44.749 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:44.749 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:44.749 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:44.749 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.749 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.749 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.007 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTkwMmMwOGY3YTdkMGZjNWQ0ODJhMDk2ZGRmZjlmMjZ64U1I: --dhchap-ctrl-secret DHHC-1:02:MGE3OWQyMmNmNGIwMzA2NTgzYjhmMDM4NDRhZWZkMTBmM2YxYzdmZTBlNDRkYjExHkTEaw==: 00:15:45.008 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:01:YTkwMmMwOGY3YTdkMGZjNWQ0ODJhMDk2ZGRmZjlmMjZ64U1I: --dhchap-ctrl-secret DHHC-1:02:MGE3OWQyMmNmNGIwMzA2NTgzYjhmMDM4NDRhZWZkMTBmM2YxYzdmZTBlNDRkYjExHkTEaw==: 00:15:45.576 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.576 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.576 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:15:45.576 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.576 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.576 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.576 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:45.576 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:45.576 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:45.835 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:15:45.835 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:45.835 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:45.835 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:45.835 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:45.835 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.835 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:45.835 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.835 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.835 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.835 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:45.835 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:45.835 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.094 00:15:46.094 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:46.352 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:46.352 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.352 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.352 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:46.352 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.352 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.352 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.352 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:46.352 { 00:15:46.352 "cntlid": 21, 00:15:46.352 "qid": 0, 00:15:46.352 "state": "enabled", 00:15:46.352 "thread": "nvmf_tgt_poll_group_000", 00:15:46.352 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:15:46.352 "listen_address": { 00:15:46.352 "trtype": "TCP", 00:15:46.352 "adrfam": "IPv4", 00:15:46.352 "traddr": "10.0.0.3", 00:15:46.352 "trsvcid": "4420" 00:15:46.352 }, 00:15:46.352 "peer_address": { 00:15:46.352 "trtype": "TCP", 00:15:46.352 "adrfam": "IPv4", 00:15:46.352 "traddr": "10.0.0.1", 00:15:46.352 "trsvcid": "33098" 00:15:46.352 }, 00:15:46.352 "auth": { 00:15:46.352 "state": "completed", 00:15:46.352 "digest": "sha256", 00:15:46.352 "dhgroup": "ffdhe3072" 00:15:46.352 } 00:15:46.352 } 00:15:46.352 ]' 00:15:46.352 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:46.611 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:46.611 14:20:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:46.611 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:46.611 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:46.611 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.611 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.611 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.869 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTVjZjM5ZDQ2MWRmOGUzODllYjM2MzVlNDMwOTc1NjA0NzQ0OWFmOWIyZmZlYzlmWliPlg==: --dhchap-ctrl-secret DHHC-1:01:ZWEwMjQzYmM0ODJlZDU5OTBhYjZkOWNmMjIyN2ZkZTGcv5Bp: 00:15:46.869 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:02:NTVjZjM5ZDQ2MWRmOGUzODllYjM2MzVlNDMwOTc1NjA0NzQ0OWFmOWIyZmZlYzlmWliPlg==: --dhchap-ctrl-secret DHHC-1:01:ZWEwMjQzYmM0ODJlZDU5OTBhYjZkOWNmMjIyN2ZkZTGcv5Bp: 00:15:47.440 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.440 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.440 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:15:47.440 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.440 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.440 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.440 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:47.440 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:47.440 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:47.700 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:15:47.700 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:47.700 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:47.700 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:47.700 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:47.700 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.700 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key3 00:15:47.700 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.700 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.700 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.700 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:47.700 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:47.700 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:47.960 00:15:48.218 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:48.218 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:48.218 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:48.218 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.218 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:48.218 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.218 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.218 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.218 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:48.218 { 00:15:48.218 "cntlid": 23, 00:15:48.218 "qid": 0, 00:15:48.218 "state": "enabled", 00:15:48.218 "thread": "nvmf_tgt_poll_group_000", 00:15:48.218 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:15:48.218 "listen_address": { 00:15:48.218 "trtype": "TCP", 00:15:48.218 "adrfam": "IPv4", 00:15:48.218 "traddr": "10.0.0.3", 00:15:48.218 "trsvcid": "4420" 00:15:48.218 }, 00:15:48.218 "peer_address": { 00:15:48.218 "trtype": "TCP", 00:15:48.218 "adrfam": "IPv4", 00:15:48.218 "traddr": "10.0.0.1", 00:15:48.218 "trsvcid": "33134" 00:15:48.218 }, 00:15:48.218 "auth": { 00:15:48.218 "state": "completed", 00:15:48.218 "digest": "sha256", 00:15:48.218 "dhgroup": "ffdhe3072" 00:15:48.218 } 00:15:48.218 } 00:15:48.218 ]' 00:15:48.219 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:48.477 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:15:48.477 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:48.477 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:48.477 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:48.477 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:48.477 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:48.477 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.736 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjYwYjE4ODZiZGUzYmE2ZWI3NGMxZDE3NjI3YWRhYjhkMDVlM2Q4ZDNjYTI1MWVhZjkwNzY0YWM3ZmMyZmQzZImPBGY=: 00:15:48.736 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:03:YjYwYjE4ODZiZGUzYmE2ZWI3NGMxZDE3NjI3YWRhYjhkMDVlM2Q4ZDNjYTI1MWVhZjkwNzY0YWM3ZmMyZmQzZImPBGY=: 00:15:49.304 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:49.304 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.304 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:15:49.304 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.304 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.304 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.304 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:49.304 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:49.304 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:49.304 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:49.564 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:15:49.564 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:49.564 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:49.564 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:49.564 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:49.564 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:49.564 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.564 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.564 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.564 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.564 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.564 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.564 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.823 00:15:50.082 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:50.082 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:50.082 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.340 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.340 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.340 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.340 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.340 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.340 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:50.340 { 00:15:50.340 "cntlid": 25, 00:15:50.340 "qid": 0, 00:15:50.340 "state": "enabled", 00:15:50.340 "thread": "nvmf_tgt_poll_group_000", 00:15:50.340 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:15:50.340 "listen_address": { 00:15:50.340 "trtype": "TCP", 00:15:50.340 "adrfam": "IPv4", 00:15:50.340 "traddr": "10.0.0.3", 00:15:50.340 "trsvcid": "4420" 00:15:50.340 }, 00:15:50.340 "peer_address": { 00:15:50.340 "trtype": "TCP", 00:15:50.340 "adrfam": "IPv4", 00:15:50.340 "traddr": "10.0.0.1", 00:15:50.340 "trsvcid": "33166" 00:15:50.340 }, 00:15:50.340 "auth": { 00:15:50.340 "state": "completed", 00:15:50.340 "digest": "sha256", 00:15:50.340 "dhgroup": "ffdhe4096" 00:15:50.340 } 00:15:50.340 } 00:15:50.340 ]' 00:15:50.340 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:15:50.340 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:50.340 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:50.340 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:50.340 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:50.340 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:50.340 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.340 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.613 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjBiNzUzNDk5NTJiMDI4YWY5YjAxNGE2M2JhOWY3MGM5NjdhNjYwN2QxZGEyYjVk1hdk9g==: --dhchap-ctrl-secret DHHC-1:03:MWUxNmY4Njk5NmQ3NzkwNWEyYTk0ZjM5Yzg0NDU1MDkzYTMyYTkzZGRhOGYzYThmZDk0NjM3YmI2MzU3ZGU2OKKEk9A=: 00:15:50.613 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:00:MjBiNzUzNDk5NTJiMDI4YWY5YjAxNGE2M2JhOWY3MGM5NjdhNjYwN2QxZGEyYjVk1hdk9g==: --dhchap-ctrl-secret DHHC-1:03:MWUxNmY4Njk5NmQ3NzkwNWEyYTk0ZjM5Yzg0NDU1MDkzYTMyYTkzZGRhOGYzYThmZDk0NjM3YmI2MzU3ZGU2OKKEk9A=: 00:15:51.200 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.200 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.200 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:15:51.200 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.200 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.200 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.200 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:51.200 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:51.200 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:51.459 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:15:51.459 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:51.459 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:51.459 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:51.459 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:51.459 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.459 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.459 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.459 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.459 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.459 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.459 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.459 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.718 00:15:51.976 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:51.976 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:51.976 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.233 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.233 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.233 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.233 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.233 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.233 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:52.233 { 00:15:52.233 "cntlid": 27, 00:15:52.233 "qid": 0, 00:15:52.233 "state": "enabled", 00:15:52.233 "thread": "nvmf_tgt_poll_group_000", 00:15:52.233 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:15:52.233 "listen_address": { 00:15:52.233 "trtype": "TCP", 00:15:52.233 "adrfam": "IPv4", 00:15:52.233 "traddr": "10.0.0.3", 00:15:52.233 "trsvcid": "4420" 00:15:52.233 }, 00:15:52.233 "peer_address": { 00:15:52.233 "trtype": "TCP", 00:15:52.233 "adrfam": "IPv4", 00:15:52.233 "traddr": "10.0.0.1", 00:15:52.233 "trsvcid": "33188" 00:15:52.233 }, 00:15:52.233 "auth": { 00:15:52.233 "state": "completed", 
00:15:52.233 "digest": "sha256", 00:15:52.233 "dhgroup": "ffdhe4096" 00:15:52.233 } 00:15:52.233 } 00:15:52.233 ]' 00:15:52.233 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:52.233 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:52.233 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:52.233 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:52.233 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:52.233 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.233 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.233 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.491 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTkwMmMwOGY3YTdkMGZjNWQ0ODJhMDk2ZGRmZjlmMjZ64U1I: --dhchap-ctrl-secret DHHC-1:02:MGE3OWQyMmNmNGIwMzA2NTgzYjhmMDM4NDRhZWZkMTBmM2YxYzdmZTBlNDRkYjExHkTEaw==: 00:15:52.491 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:01:YTkwMmMwOGY3YTdkMGZjNWQ0ODJhMDk2ZGRmZjlmMjZ64U1I: --dhchap-ctrl-secret DHHC-1:02:MGE3OWQyMmNmNGIwMzA2NTgzYjhmMDM4NDRhZWZkMTBmM2YxYzdmZTBlNDRkYjExHkTEaw==: 00:15:53.059 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.059 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.059 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:15:53.059 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.059 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.059 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.059 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:53.059 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:53.059 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:53.318 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:15:53.318 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:53.318 14:20:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:53.318 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:53.318 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:53.318 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.318 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.318 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.318 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.318 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.318 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.318 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.318 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.886 00:15:53.886 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:53.886 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.886 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:54.311 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.311 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.311 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.311 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.311 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.311 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:54.311 { 00:15:54.312 "cntlid": 29, 00:15:54.312 "qid": 0, 00:15:54.312 "state": "enabled", 00:15:54.312 "thread": "nvmf_tgt_poll_group_000", 00:15:54.312 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:15:54.312 "listen_address": { 00:15:54.312 "trtype": "TCP", 00:15:54.312 "adrfam": "IPv4", 00:15:54.312 "traddr": "10.0.0.3", 00:15:54.312 "trsvcid": "4420" 00:15:54.312 }, 00:15:54.312 "peer_address": { 00:15:54.312 "trtype": "TCP", 00:15:54.312 "adrfam": 
"IPv4", 00:15:54.312 "traddr": "10.0.0.1", 00:15:54.312 "trsvcid": "37020" 00:15:54.312 }, 00:15:54.312 "auth": { 00:15:54.312 "state": "completed", 00:15:54.312 "digest": "sha256", 00:15:54.312 "dhgroup": "ffdhe4096" 00:15:54.312 } 00:15:54.312 } 00:15:54.312 ]' 00:15:54.312 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:54.312 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:54.312 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:54.312 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:54.312 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:54.312 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.312 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.312 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.571 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTVjZjM5ZDQ2MWRmOGUzODllYjM2MzVlNDMwOTc1NjA0NzQ0OWFmOWIyZmZlYzlmWliPlg==: --dhchap-ctrl-secret DHHC-1:01:ZWEwMjQzYmM0ODJlZDU5OTBhYjZkOWNmMjIyN2ZkZTGcv5Bp: 00:15:54.571 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:02:NTVjZjM5ZDQ2MWRmOGUzODllYjM2MzVlNDMwOTc1NjA0NzQ0OWFmOWIyZmZlYzlmWliPlg==: --dhchap-ctrl-secret DHHC-1:01:ZWEwMjQzYmM0ODJlZDU5OTBhYjZkOWNmMjIyN2ZkZTGcv5Bp: 00:15:55.138 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.138 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.138 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:15:55.138 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.138 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.138 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.138 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:55.138 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:55.138 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:55.397 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:15:55.397 14:20:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:55.397 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:55.397 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:55.397 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:55.397 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.397 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key3 00:15:55.397 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.397 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.397 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.397 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:55.397 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:55.397 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:55.656 00:15:55.656 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:55.656 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:55.656 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.914 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.914 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.914 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.914 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.914 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.914 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:55.914 { 00:15:55.914 "cntlid": 31, 00:15:55.914 "qid": 0, 00:15:55.914 "state": "enabled", 00:15:55.914 "thread": "nvmf_tgt_poll_group_000", 00:15:55.914 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:15:55.914 "listen_address": { 00:15:55.914 "trtype": "TCP", 00:15:55.914 "adrfam": "IPv4", 00:15:55.914 "traddr": "10.0.0.3", 00:15:55.914 "trsvcid": "4420" 00:15:55.914 }, 00:15:55.914 "peer_address": { 00:15:55.914 "trtype": "TCP", 
00:15:55.914 "adrfam": "IPv4", 00:15:55.914 "traddr": "10.0.0.1", 00:15:55.914 "trsvcid": "37054" 00:15:55.914 }, 00:15:55.914 "auth": { 00:15:55.914 "state": "completed", 00:15:55.914 "digest": "sha256", 00:15:55.914 "dhgroup": "ffdhe4096" 00:15:55.914 } 00:15:55.914 } 00:15:55.914 ]' 00:15:55.914 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:55.914 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:55.914 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:55.914 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:56.174 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:56.174 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.174 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.174 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.433 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjYwYjE4ODZiZGUzYmE2ZWI3NGMxZDE3NjI3YWRhYjhkMDVlM2Q4ZDNjYTI1MWVhZjkwNzY0YWM3ZmMyZmQzZImPBGY=: 00:15:56.433 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:03:YjYwYjE4ODZiZGUzYmE2ZWI3NGMxZDE3NjI3YWRhYjhkMDVlM2Q4ZDNjYTI1MWVhZjkwNzY0YWM3ZmMyZmQzZImPBGY=: 00:15:57.000 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.000 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.000 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:15:57.000 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.000 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.000 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.000 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:57.000 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:57.000 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:57.000 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:57.258 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:15:57.258 
14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:57.258 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:57.258 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:57.258 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:57.258 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.258 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.258 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.258 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.258 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.258 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.258 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.258 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.516 00:15:57.516 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:57.516 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:57.516 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.775 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.775 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.775 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.775 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.775 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.775 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:57.775 { 00:15:57.775 "cntlid": 33, 00:15:57.775 "qid": 0, 00:15:57.775 "state": "enabled", 00:15:57.775 "thread": "nvmf_tgt_poll_group_000", 00:15:57.775 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:15:57.775 "listen_address": { 00:15:57.775 "trtype": "TCP", 00:15:57.775 "adrfam": "IPv4", 00:15:57.775 "traddr": 
"10.0.0.3", 00:15:57.775 "trsvcid": "4420" 00:15:57.775 }, 00:15:57.775 "peer_address": { 00:15:57.775 "trtype": "TCP", 00:15:57.775 "adrfam": "IPv4", 00:15:57.775 "traddr": "10.0.0.1", 00:15:57.775 "trsvcid": "37088" 00:15:57.775 }, 00:15:57.775 "auth": { 00:15:57.775 "state": "completed", 00:15:57.775 "digest": "sha256", 00:15:57.775 "dhgroup": "ffdhe6144" 00:15:57.775 } 00:15:57.775 } 00:15:57.775 ]' 00:15:57.775 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:58.034 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:58.034 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:58.034 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:58.034 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:58.034 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.034 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.034 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.292 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjBiNzUzNDk5NTJiMDI4YWY5YjAxNGE2M2JhOWY3MGM5NjdhNjYwN2QxZGEyYjVk1hdk9g==: --dhchap-ctrl-secret DHHC-1:03:MWUxNmY4Njk5NmQ3NzkwNWEyYTk0ZjM5Yzg0NDU1MDkzYTMyYTkzZGRhOGYzYThmZDk0NjM3YmI2MzU3ZGU2OKKEk9A=: 00:15:58.292 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:00:MjBiNzUzNDk5NTJiMDI4YWY5YjAxNGE2M2JhOWY3MGM5NjdhNjYwN2QxZGEyYjVk1hdk9g==: --dhchap-ctrl-secret DHHC-1:03:MWUxNmY4Njk5NmQ3NzkwNWEyYTk0ZjM5Yzg0NDU1MDkzYTMyYTkzZGRhOGYzYThmZDk0NjM3YmI2MzU3ZGU2OKKEk9A=: 00:15:58.860 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.860 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.860 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:15:58.860 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.860 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.860 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.860 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:58.860 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:58.860 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:59.120 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:15:59.120 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:59.120 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:59.120 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:59.120 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:59.120 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:59.120 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.120 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.120 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.120 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.120 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.120 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.120 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.689 00:15:59.689 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:59.689 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:59.689 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.946 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.946 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.946 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.946 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.946 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.946 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:59.946 { 00:15:59.946 "cntlid": 35, 00:15:59.946 "qid": 0, 00:15:59.946 "state": "enabled", 00:15:59.946 "thread": "nvmf_tgt_poll_group_000", 
00:15:59.946 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:15:59.946 "listen_address": { 00:15:59.946 "trtype": "TCP", 00:15:59.946 "adrfam": "IPv4", 00:15:59.946 "traddr": "10.0.0.3", 00:15:59.946 "trsvcid": "4420" 00:15:59.946 }, 00:15:59.946 "peer_address": { 00:15:59.946 "trtype": "TCP", 00:15:59.946 "adrfam": "IPv4", 00:15:59.946 "traddr": "10.0.0.1", 00:15:59.946 "trsvcid": "37106" 00:15:59.946 }, 00:15:59.946 "auth": { 00:15:59.946 "state": "completed", 00:15:59.946 "digest": "sha256", 00:15:59.946 "dhgroup": "ffdhe6144" 00:15:59.946 } 00:15:59.946 } 00:15:59.946 ]' 00:15:59.946 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:59.946 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:59.946 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:59.946 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:59.946 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:59.946 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.946 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.946 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.218 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTkwMmMwOGY3YTdkMGZjNWQ0ODJhMDk2ZGRmZjlmMjZ64U1I: --dhchap-ctrl-secret DHHC-1:02:MGE3OWQyMmNmNGIwMzA2NTgzYjhmMDM4NDRhZWZkMTBmM2YxYzdmZTBlNDRkYjExHkTEaw==: 00:16:00.218 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:01:YTkwMmMwOGY3YTdkMGZjNWQ0ODJhMDk2ZGRmZjlmMjZ64U1I: --dhchap-ctrl-secret DHHC-1:02:MGE3OWQyMmNmNGIwMzA2NTgzYjhmMDM4NDRhZWZkMTBmM2YxYzdmZTBlNDRkYjExHkTEaw==: 00:16:00.787 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.787 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.787 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:16:00.787 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.787 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.787 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.787 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:00.787 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:00.787 14:20:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:01.046 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:16:01.046 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:01.046 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:01.046 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:01.046 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:01.046 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.046 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:01.046 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.046 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.046 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.046 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:01.046 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:01.046 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:01.615 00:16:01.615 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:01.615 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:01.615 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.615 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.615 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.615 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.615 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.874 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.874 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:01.874 { 
00:16:01.874 "cntlid": 37, 00:16:01.874 "qid": 0, 00:16:01.874 "state": "enabled", 00:16:01.874 "thread": "nvmf_tgt_poll_group_000", 00:16:01.874 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:16:01.874 "listen_address": { 00:16:01.874 "trtype": "TCP", 00:16:01.874 "adrfam": "IPv4", 00:16:01.874 "traddr": "10.0.0.3", 00:16:01.874 "trsvcid": "4420" 00:16:01.874 }, 00:16:01.874 "peer_address": { 00:16:01.874 "trtype": "TCP", 00:16:01.874 "adrfam": "IPv4", 00:16:01.874 "traddr": "10.0.0.1", 00:16:01.874 "trsvcid": "37130" 00:16:01.874 }, 00:16:01.874 "auth": { 00:16:01.874 "state": "completed", 00:16:01.874 "digest": "sha256", 00:16:01.874 "dhgroup": "ffdhe6144" 00:16:01.874 } 00:16:01.874 } 00:16:01.874 ]' 00:16:01.874 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:01.874 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:01.874 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:01.874 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:01.874 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:01.874 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.874 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.874 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.137 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTVjZjM5ZDQ2MWRmOGUzODllYjM2MzVlNDMwOTc1NjA0NzQ0OWFmOWIyZmZlYzlmWliPlg==: --dhchap-ctrl-secret DHHC-1:01:ZWEwMjQzYmM0ODJlZDU5OTBhYjZkOWNmMjIyN2ZkZTGcv5Bp: 00:16:02.138 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:02:NTVjZjM5ZDQ2MWRmOGUzODllYjM2MzVlNDMwOTc1NjA0NzQ0OWFmOWIyZmZlYzlmWliPlg==: --dhchap-ctrl-secret DHHC-1:01:ZWEwMjQzYmM0ODJlZDU5OTBhYjZkOWNmMjIyN2ZkZTGcv5Bp: 00:16:02.712 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.712 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.712 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:16:02.712 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.712 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.712 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.712 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:02.712 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:02.712 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:02.972 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:16:02.972 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:02.972 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:02.972 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:02.972 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:02.972 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.972 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key3 00:16:02.972 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.972 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.972 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.972 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:02.972 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:02.972 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:03.231 00:16:03.231 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:03.231 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:03.231 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.490 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.490 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.490 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.490 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.490 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.490 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:16:03.490 { 00:16:03.490 "cntlid": 39, 00:16:03.490 "qid": 0, 00:16:03.490 "state": "enabled", 00:16:03.490 "thread": "nvmf_tgt_poll_group_000", 00:16:03.490 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:16:03.490 "listen_address": { 00:16:03.490 "trtype": "TCP", 00:16:03.490 "adrfam": "IPv4", 00:16:03.490 "traddr": "10.0.0.3", 00:16:03.490 "trsvcid": "4420" 00:16:03.490 }, 00:16:03.490 "peer_address": { 00:16:03.490 "trtype": "TCP", 00:16:03.490 "adrfam": "IPv4", 00:16:03.490 "traddr": "10.0.0.1", 00:16:03.490 "trsvcid": "37146" 00:16:03.490 }, 00:16:03.490 "auth": { 00:16:03.490 "state": "completed", 00:16:03.490 "digest": "sha256", 00:16:03.490 "dhgroup": "ffdhe6144" 00:16:03.490 } 00:16:03.490 } 00:16:03.490 ]' 00:16:03.490 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:03.490 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:03.490 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:03.490 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:03.490 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:03.749 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.749 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.749 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.008 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjYwYjE4ODZiZGUzYmE2ZWI3NGMxZDE3NjI3YWRhYjhkMDVlM2Q4ZDNjYTI1MWVhZjkwNzY0YWM3ZmMyZmQzZImPBGY=: 00:16:04.008 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:03:YjYwYjE4ODZiZGUzYmE2ZWI3NGMxZDE3NjI3YWRhYjhkMDVlM2Q4ZDNjYTI1MWVhZjkwNzY0YWM3ZmMyZmQzZImPBGY=: 00:16:04.576 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.577 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.577 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:16:04.577 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.577 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.577 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.577 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:04.577 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:04.577 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:04.577 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:04.577 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:16:04.577 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:04.577 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:04.577 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:04.577 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:04.577 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.577 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.577 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.577 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.577 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.577 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.577 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.577 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:05.146 00:16:05.146 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:05.146 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:05.146 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.714 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.714 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.714 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.714 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.714 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:16:05.714 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:05.714 { 00:16:05.714 "cntlid": 41, 00:16:05.714 "qid": 0, 00:16:05.714 "state": "enabled", 00:16:05.714 "thread": "nvmf_tgt_poll_group_000", 00:16:05.714 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:16:05.714 "listen_address": { 00:16:05.714 "trtype": "TCP", 00:16:05.714 "adrfam": "IPv4", 00:16:05.714 "traddr": "10.0.0.3", 00:16:05.714 "trsvcid": "4420" 00:16:05.714 }, 00:16:05.714 "peer_address": { 00:16:05.714 "trtype": "TCP", 00:16:05.714 "adrfam": "IPv4", 00:16:05.714 "traddr": "10.0.0.1", 00:16:05.714 "trsvcid": "58104" 00:16:05.714 }, 00:16:05.714 "auth": { 00:16:05.714 "state": "completed", 00:16:05.714 "digest": "sha256", 00:16:05.714 "dhgroup": "ffdhe8192" 00:16:05.714 } 00:16:05.714 } 00:16:05.714 ]' 00:16:05.714 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:05.714 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:05.714 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:05.714 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:05.714 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:05.714 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.714 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.714 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.973 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjBiNzUzNDk5NTJiMDI4YWY5YjAxNGE2M2JhOWY3MGM5NjdhNjYwN2QxZGEyYjVk1hdk9g==: --dhchap-ctrl-secret DHHC-1:03:MWUxNmY4Njk5NmQ3NzkwNWEyYTk0ZjM5Yzg0NDU1MDkzYTMyYTkzZGRhOGYzYThmZDk0NjM3YmI2MzU3ZGU2OKKEk9A=: 00:16:05.973 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:00:MjBiNzUzNDk5NTJiMDI4YWY5YjAxNGE2M2JhOWY3MGM5NjdhNjYwN2QxZGEyYjVk1hdk9g==: --dhchap-ctrl-secret DHHC-1:03:MWUxNmY4Njk5NmQ3NzkwNWEyYTk0ZjM5Yzg0NDU1MDkzYTMyYTkzZGRhOGYzYThmZDk0NjM3YmI2MzU3ZGU2OKKEk9A=: 00:16:06.580 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.580 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:16:06.580 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.580 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.580 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:06.580 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:06.580 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:06.580 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:06.840 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:16:06.840 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:06.840 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:06.840 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:06.840 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:06.840 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.840 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.840 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.840 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.840 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.840 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.840 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.840 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.407 00:16:07.407 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:07.407 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:07.407 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.666 14:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.666 14:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.666 14:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.666 14:20:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.666 14:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.666 14:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:07.666 { 00:16:07.666 "cntlid": 43, 00:16:07.666 "qid": 0, 00:16:07.666 "state": "enabled", 00:16:07.666 "thread": "nvmf_tgt_poll_group_000", 00:16:07.666 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:16:07.666 "listen_address": { 00:16:07.666 "trtype": "TCP", 00:16:07.666 "adrfam": "IPv4", 00:16:07.666 "traddr": "10.0.0.3", 00:16:07.666 "trsvcid": "4420" 00:16:07.666 }, 00:16:07.666 "peer_address": { 00:16:07.666 "trtype": "TCP", 00:16:07.666 "adrfam": "IPv4", 00:16:07.666 "traddr": "10.0.0.1", 00:16:07.666 "trsvcid": "58136" 00:16:07.666 }, 00:16:07.666 "auth": { 00:16:07.666 "state": "completed", 00:16:07.666 "digest": "sha256", 00:16:07.666 "dhgroup": "ffdhe8192" 00:16:07.666 } 00:16:07.666 } 00:16:07.666 ]' 00:16:07.666 14:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:07.666 14:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:07.666 14:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:07.666 14:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:07.666 14:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:07.925 14:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.925 14:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.925 14:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.925 14:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTkwMmMwOGY3YTdkMGZjNWQ0ODJhMDk2ZGRmZjlmMjZ64U1I: --dhchap-ctrl-secret DHHC-1:02:MGE3OWQyMmNmNGIwMzA2NTgzYjhmMDM4NDRhZWZkMTBmM2YxYzdmZTBlNDRkYjExHkTEaw==: 00:16:07.925 14:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:01:YTkwMmMwOGY3YTdkMGZjNWQ0ODJhMDk2ZGRmZjlmMjZ64U1I: --dhchap-ctrl-secret DHHC-1:02:MGE3OWQyMmNmNGIwMzA2NTgzYjhmMDM4NDRhZWZkMTBmM2YxYzdmZTBlNDRkYjExHkTEaw==: 00:16:08.494 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.494 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.494 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:16:08.494 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.494 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:08.494 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.494 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:08.494 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:08.494 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:08.753 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:16:08.754 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:08.754 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:08.754 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:08.754 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:08.754 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.754 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:08.754 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.754 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.754 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.754 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:08.754 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:08.754 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.321 00:16:09.608 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:09.608 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.608 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:09.608 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.608 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.608 14:20:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.608 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.867 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.867 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:09.867 { 00:16:09.867 "cntlid": 45, 00:16:09.867 "qid": 0, 00:16:09.867 "state": "enabled", 00:16:09.867 "thread": "nvmf_tgt_poll_group_000", 00:16:09.867 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:16:09.867 "listen_address": { 00:16:09.867 "trtype": "TCP", 00:16:09.867 "adrfam": "IPv4", 00:16:09.867 "traddr": "10.0.0.3", 00:16:09.867 "trsvcid": "4420" 00:16:09.867 }, 00:16:09.867 "peer_address": { 00:16:09.867 "trtype": "TCP", 00:16:09.867 "adrfam": "IPv4", 00:16:09.867 "traddr": "10.0.0.1", 00:16:09.867 "trsvcid": "58154" 00:16:09.867 }, 00:16:09.867 "auth": { 00:16:09.867 "state": "completed", 00:16:09.867 "digest": "sha256", 00:16:09.867 "dhgroup": "ffdhe8192" 00:16:09.867 } 00:16:09.867 } 00:16:09.867 ]' 00:16:09.867 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:09.867 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:09.867 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:09.867 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:09.867 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:09.867 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.867 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.867 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.126 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTVjZjM5ZDQ2MWRmOGUzODllYjM2MzVlNDMwOTc1NjA0NzQ0OWFmOWIyZmZlYzlmWliPlg==: --dhchap-ctrl-secret DHHC-1:01:ZWEwMjQzYmM0ODJlZDU5OTBhYjZkOWNmMjIyN2ZkZTGcv5Bp: 00:16:10.126 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:02:NTVjZjM5ZDQ2MWRmOGUzODllYjM2MzVlNDMwOTc1NjA0NzQ0OWFmOWIyZmZlYzlmWliPlg==: --dhchap-ctrl-secret DHHC-1:01:ZWEwMjQzYmM0ODJlZDU5OTBhYjZkOWNmMjIyN2ZkZTGcv5Bp: 00:16:10.694 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.694 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.694 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:16:10.694 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:10.694 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.694 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.694 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:10.694 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:10.694 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:10.954 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:16:10.954 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:10.954 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:10.954 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:10.954 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:10.954 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.954 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key3 00:16:10.954 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.954 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.954 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.954 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:10.954 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:10.954 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:11.522 00:16:11.522 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:11.522 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:11.522 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.782 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.782 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.782 
14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.782 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.782 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.782 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:11.782 { 00:16:11.782 "cntlid": 47, 00:16:11.782 "qid": 0, 00:16:11.782 "state": "enabled", 00:16:11.782 "thread": "nvmf_tgt_poll_group_000", 00:16:11.782 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:16:11.782 "listen_address": { 00:16:11.782 "trtype": "TCP", 00:16:11.782 "adrfam": "IPv4", 00:16:11.782 "traddr": "10.0.0.3", 00:16:11.782 "trsvcid": "4420" 00:16:11.782 }, 00:16:11.782 "peer_address": { 00:16:11.782 "trtype": "TCP", 00:16:11.782 "adrfam": "IPv4", 00:16:11.782 "traddr": "10.0.0.1", 00:16:11.782 "trsvcid": "58182" 00:16:11.782 }, 00:16:11.782 "auth": { 00:16:11.782 "state": "completed", 00:16:11.782 "digest": "sha256", 00:16:11.782 "dhgroup": "ffdhe8192" 00:16:11.782 } 00:16:11.782 } 00:16:11.782 ]' 00:16:11.782 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:11.782 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:11.782 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:11.782 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:11.782 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:12.042 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.042 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.042 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.042 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjYwYjE4ODZiZGUzYmE2ZWI3NGMxZDE3NjI3YWRhYjhkMDVlM2Q4ZDNjYTI1MWVhZjkwNzY0YWM3ZmMyZmQzZImPBGY=: 00:16:12.042 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:03:YjYwYjE4ODZiZGUzYmE2ZWI3NGMxZDE3NjI3YWRhYjhkMDVlM2Q4ZDNjYTI1MWVhZjkwNzY0YWM3ZmMyZmQzZImPBGY=: 00:16:12.610 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.610 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.610 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:16:12.610 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.610 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:12.610 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.610 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:12.610 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:12.610 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:12.610 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:12.610 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:12.869 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:16:12.869 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:12.869 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:12.869 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:12.869 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:12.869 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:12.869 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:12.869 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.869 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.869 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.869 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:12.869 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:12.869 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.132 00:16:13.390 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:13.390 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:13.390 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.390 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.390 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.390 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.390 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.390 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.390 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:13.390 { 00:16:13.390 "cntlid": 49, 00:16:13.390 "qid": 0, 00:16:13.390 "state": "enabled", 00:16:13.390 "thread": "nvmf_tgt_poll_group_000", 00:16:13.390 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:16:13.390 "listen_address": { 00:16:13.390 "trtype": "TCP", 00:16:13.390 "adrfam": "IPv4", 00:16:13.390 "traddr": "10.0.0.3", 00:16:13.390 "trsvcid": "4420" 00:16:13.390 }, 00:16:13.390 "peer_address": { 00:16:13.390 "trtype": "TCP", 00:16:13.390 "adrfam": "IPv4", 00:16:13.390 "traddr": "10.0.0.1", 00:16:13.390 "trsvcid": "48130" 00:16:13.390 }, 00:16:13.390 "auth": { 00:16:13.390 "state": "completed", 00:16:13.390 "digest": "sha384", 00:16:13.390 "dhgroup": "null" 00:16:13.390 } 00:16:13.390 } 00:16:13.390 ]' 00:16:13.390 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:13.650 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:13.650 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:13.650 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:13.650 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:13.650 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.650 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.650 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.909 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjBiNzUzNDk5NTJiMDI4YWY5YjAxNGE2M2JhOWY3MGM5NjdhNjYwN2QxZGEyYjVk1hdk9g==: --dhchap-ctrl-secret DHHC-1:03:MWUxNmY4Njk5NmQ3NzkwNWEyYTk0ZjM5Yzg0NDU1MDkzYTMyYTkzZGRhOGYzYThmZDk0NjM3YmI2MzU3ZGU2OKKEk9A=: 00:16:13.909 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:00:MjBiNzUzNDk5NTJiMDI4YWY5YjAxNGE2M2JhOWY3MGM5NjdhNjYwN2QxZGEyYjVk1hdk9g==: --dhchap-ctrl-secret DHHC-1:03:MWUxNmY4Njk5NmQ3NzkwNWEyYTk0ZjM5Yzg0NDU1MDkzYTMyYTkzZGRhOGYzYThmZDk0NjM3YmI2MzU3ZGU2OKKEk9A=: 00:16:14.475 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.475 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.475 14:20:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:16:14.475 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.475 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.475 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.475 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:14.475 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:14.475 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:14.734 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:16:14.734 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:14.734 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:14.734 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:14.734 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:14.734 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.734 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.734 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.734 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.734 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.734 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.734 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.734 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.992 00:16:14.992 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:14.992 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:14.992 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.251 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.251 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.251 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.251 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.251 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.251 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:15.251 { 00:16:15.251 "cntlid": 51, 00:16:15.251 "qid": 0, 00:16:15.251 "state": "enabled", 00:16:15.251 "thread": "nvmf_tgt_poll_group_000", 00:16:15.251 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:16:15.251 "listen_address": { 00:16:15.251 "trtype": "TCP", 00:16:15.251 "adrfam": "IPv4", 00:16:15.251 "traddr": "10.0.0.3", 00:16:15.251 "trsvcid": "4420" 00:16:15.251 }, 00:16:15.251 "peer_address": { 00:16:15.251 "trtype": "TCP", 00:16:15.251 "adrfam": "IPv4", 00:16:15.251 "traddr": "10.0.0.1", 00:16:15.251 "trsvcid": "48172" 00:16:15.251 }, 00:16:15.251 "auth": { 00:16:15.251 "state": "completed", 00:16:15.251 "digest": "sha384", 00:16:15.251 "dhgroup": "null" 00:16:15.251 } 00:16:15.251 } 00:16:15.251 ]' 00:16:15.251 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:15.251 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:15.251 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:15.251 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:15.251 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:15.251 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.251 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.251 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.510 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTkwMmMwOGY3YTdkMGZjNWQ0ODJhMDk2ZGRmZjlmMjZ64U1I: --dhchap-ctrl-secret DHHC-1:02:MGE3OWQyMmNmNGIwMzA2NTgzYjhmMDM4NDRhZWZkMTBmM2YxYzdmZTBlNDRkYjExHkTEaw==: 00:16:15.510 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:01:YTkwMmMwOGY3YTdkMGZjNWQ0ODJhMDk2ZGRmZjlmMjZ64U1I: --dhchap-ctrl-secret DHHC-1:02:MGE3OWQyMmNmNGIwMzA2NTgzYjhmMDM4NDRhZWZkMTBmM2YxYzdmZTBlNDRkYjExHkTEaw==: 00:16:16.076 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.076 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.077 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:16:16.077 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.077 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.077 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.077 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:16.077 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:16.077 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:16.335 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:16:16.335 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:16.335 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:16.335 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:16.335 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:16.335 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.335 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.335 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.335 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.335 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.335 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.335 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.335 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.902 00:16:16.902 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:16.902 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:16:16.902 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.161 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.161 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.161 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.161 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.161 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.161 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:17.161 { 00:16:17.161 "cntlid": 53, 00:16:17.161 "qid": 0, 00:16:17.161 "state": "enabled", 00:16:17.161 "thread": "nvmf_tgt_poll_group_000", 00:16:17.161 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:16:17.161 "listen_address": { 00:16:17.161 "trtype": "TCP", 00:16:17.161 "adrfam": "IPv4", 00:16:17.161 "traddr": "10.0.0.3", 00:16:17.161 "trsvcid": "4420" 00:16:17.161 }, 00:16:17.161 "peer_address": { 00:16:17.161 "trtype": "TCP", 00:16:17.161 "adrfam": "IPv4", 00:16:17.161 "traddr": "10.0.0.1", 00:16:17.161 "trsvcid": "48188" 00:16:17.161 }, 00:16:17.161 "auth": { 00:16:17.161 "state": "completed", 00:16:17.161 "digest": "sha384", 00:16:17.161 "dhgroup": "null" 00:16:17.161 } 00:16:17.161 } 00:16:17.161 ]' 00:16:17.161 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:17.161 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:17.161 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:17.161 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:17.161 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:17.161 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.161 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.161 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.420 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTVjZjM5ZDQ2MWRmOGUzODllYjM2MzVlNDMwOTc1NjA0NzQ0OWFmOWIyZmZlYzlmWliPlg==: --dhchap-ctrl-secret DHHC-1:01:ZWEwMjQzYmM0ODJlZDU5OTBhYjZkOWNmMjIyN2ZkZTGcv5Bp: 00:16:17.420 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:02:NTVjZjM5ZDQ2MWRmOGUzODllYjM2MzVlNDMwOTc1NjA0NzQ0OWFmOWIyZmZlYzlmWliPlg==: --dhchap-ctrl-secret DHHC-1:01:ZWEwMjQzYmM0ODJlZDU5OTBhYjZkOWNmMjIyN2ZkZTGcv5Bp: 00:16:17.988 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.988 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.988 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:16:17.988 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.988 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.988 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.988 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:17.988 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:17.988 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:18.247 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:16:18.247 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:18.247 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:18.247 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:18.247 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:18.247 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.247 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key3 00:16:18.247 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.247 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.247 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.247 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:18.247 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:18.247 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:18.506 00:16:18.506 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:18.506 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.506 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:18.765 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.765 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.765 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.765 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.765 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.765 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:18.765 { 00:16:18.765 "cntlid": 55, 00:16:18.765 "qid": 0, 00:16:18.765 "state": "enabled", 00:16:18.765 "thread": "nvmf_tgt_poll_group_000", 00:16:18.765 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:16:18.765 "listen_address": { 00:16:18.765 "trtype": "TCP", 00:16:18.765 "adrfam": "IPv4", 00:16:18.765 "traddr": "10.0.0.3", 00:16:18.765 "trsvcid": "4420" 00:16:18.765 }, 00:16:18.765 "peer_address": { 00:16:18.765 "trtype": "TCP", 00:16:18.765 "adrfam": "IPv4", 00:16:18.765 "traddr": "10.0.0.1", 00:16:18.765 "trsvcid": "48228" 00:16:18.765 }, 00:16:18.765 "auth": { 00:16:18.765 "state": "completed", 00:16:18.765 "digest": "sha384", 00:16:18.765 "dhgroup": "null" 00:16:18.765 } 00:16:18.765 } 00:16:18.765 ]' 00:16:18.765 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:18.765 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:18.765 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:18.765 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:18.765 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:18.765 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.765 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.765 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.024 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjYwYjE4ODZiZGUzYmE2ZWI3NGMxZDE3NjI3YWRhYjhkMDVlM2Q4ZDNjYTI1MWVhZjkwNzY0YWM3ZmMyZmQzZImPBGY=: 00:16:19.025 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:03:YjYwYjE4ODZiZGUzYmE2ZWI3NGMxZDE3NjI3YWRhYjhkMDVlM2Q4ZDNjYTI1MWVhZjkwNzY0YWM3ZmMyZmQzZImPBGY=: 00:16:19.593 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.593 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:16:19.593 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:16:19.593 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.593 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.593 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.593 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:19.593 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:19.593 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:19.593 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:19.852 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:16:19.852 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:19.852 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:19.852 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:19.852 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:19.852 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.852 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.852 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.852 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.852 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.852 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.852 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.852 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.113 00:16:20.372 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
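The pass above switches the DH group from null to ffdhe2048: the host side narrows its allowed digest/dhgroup pair with bdev_nvme_set_options, the host NQN is re-added on the target with key0/ckey0, a controller is attached with the same keys, and the resulting qpair is checked for completed sha384/ffdhe2048 authentication before teardown. A minimal standalone sketch of that RPC sequence follows, using the NQNs, address, and key names visible in this run; it is an editorial condensation of one iteration, not part of the captured trace, and it assumes the target listens on the default RPC socket (only the host socket /var/tmp/host.sock is visible above) and that key0/ckey0 are keyring entries registered earlier in the run.

#!/usr/bin/env bash
# Sketch of one connect_authenticate pass (sha384 digest, ffdhe2048 DH group, key index 0),
# condensed from the trace above. Assumptions: default target RPC socket; key0/ckey0 already
# registered in the keyring earlier in the test.
set -euo pipefail

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0

# Host side: restrict the initiator to the digest/dhgroup pair under test.
"$rpc" -s "$hostsock" bdev_nvme_set_options \
    --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

# Target side: allow the host NQN and bind it to key0 (host key) and ckey0 (controller key).
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# SPDK host path: attach a bdev controller with the same keys ...
"$rpc" -s "$hostsock" bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.3 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# ... and verify the controller exists and the qpair completed DH-HMAC-CHAP as configured.
"$rpc" -s "$hostsock" bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth'       # expect state=completed, digest=sha384, dhgroup=ffdhe2048

# Tear down before the nvme-cli (kernel initiator) variant of the same check, which uses the
# literal DHHC-1 secrets printed in the trace above:
#   nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 -q "$hostnqn" \
#       --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 \
#       --dhchap-secret DHHC-1:00:... --dhchap-ctrl-secret DHHC-1:03:...
#   nvme disconnect -n "$subnqn"
#   "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
"$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0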
00:16:20.372 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:20.372 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.372 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.372 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.372 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.372 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.372 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.631 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:20.631 { 00:16:20.631 "cntlid": 57, 00:16:20.631 "qid": 0, 00:16:20.631 "state": "enabled", 00:16:20.631 "thread": "nvmf_tgt_poll_group_000", 00:16:20.631 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:16:20.631 "listen_address": { 00:16:20.631 "trtype": "TCP", 00:16:20.631 "adrfam": "IPv4", 00:16:20.631 "traddr": "10.0.0.3", 00:16:20.631 "trsvcid": "4420" 00:16:20.631 }, 00:16:20.631 "peer_address": { 00:16:20.631 "trtype": "TCP", 00:16:20.631 "adrfam": "IPv4", 00:16:20.631 "traddr": "10.0.0.1", 00:16:20.631 "trsvcid": "48262" 00:16:20.631 }, 00:16:20.631 "auth": { 00:16:20.631 "state": "completed", 00:16:20.631 "digest": "sha384", 00:16:20.631 "dhgroup": "ffdhe2048" 00:16:20.631 } 00:16:20.631 } 00:16:20.631 ]' 00:16:20.631 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:20.631 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:20.631 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:20.631 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:20.631 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:20.631 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.631 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.631 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.890 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjBiNzUzNDk5NTJiMDI4YWY5YjAxNGE2M2JhOWY3MGM5NjdhNjYwN2QxZGEyYjVk1hdk9g==: --dhchap-ctrl-secret DHHC-1:03:MWUxNmY4Njk5NmQ3NzkwNWEyYTk0ZjM5Yzg0NDU1MDkzYTMyYTkzZGRhOGYzYThmZDk0NjM3YmI2MzU3ZGU2OKKEk9A=: 00:16:20.890 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:00:MjBiNzUzNDk5NTJiMDI4YWY5YjAxNGE2M2JhOWY3MGM5NjdhNjYwN2QxZGEyYjVk1hdk9g==: 
--dhchap-ctrl-secret DHHC-1:03:MWUxNmY4Njk5NmQ3NzkwNWEyYTk0ZjM5Yzg0NDU1MDkzYTMyYTkzZGRhOGYzYThmZDk0NjM3YmI2MzU3ZGU2OKKEk9A=: 00:16:21.459 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.459 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:16:21.459 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.459 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.459 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.459 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:21.459 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:21.459 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:21.717 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:16:21.717 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:21.717 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:21.717 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:21.717 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:21.717 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.717 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.717 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.717 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.718 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.718 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.718 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.718 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:22.286 00:16:22.286 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:22.286 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:22.286 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.286 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.286 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.286 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.286 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.545 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.545 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:22.545 { 00:16:22.545 "cntlid": 59, 00:16:22.545 "qid": 0, 00:16:22.545 "state": "enabled", 00:16:22.545 "thread": "nvmf_tgt_poll_group_000", 00:16:22.545 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:16:22.545 "listen_address": { 00:16:22.545 "trtype": "TCP", 00:16:22.545 "adrfam": "IPv4", 00:16:22.545 "traddr": "10.0.0.3", 00:16:22.545 "trsvcid": "4420" 00:16:22.545 }, 00:16:22.545 "peer_address": { 00:16:22.545 "trtype": "TCP", 00:16:22.545 "adrfam": "IPv4", 00:16:22.545 "traddr": "10.0.0.1", 00:16:22.545 "trsvcid": "48288" 00:16:22.545 }, 00:16:22.545 "auth": { 00:16:22.545 "state": "completed", 00:16:22.545 "digest": "sha384", 00:16:22.545 "dhgroup": "ffdhe2048" 00:16:22.545 } 00:16:22.545 } 00:16:22.545 ]' 00:16:22.545 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:22.546 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:22.546 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:22.546 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:22.546 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:22.546 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.546 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.546 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.805 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTkwMmMwOGY3YTdkMGZjNWQ0ODJhMDk2ZGRmZjlmMjZ64U1I: --dhchap-ctrl-secret DHHC-1:02:MGE3OWQyMmNmNGIwMzA2NTgzYjhmMDM4NDRhZWZkMTBmM2YxYzdmZTBlNDRkYjExHkTEaw==: 00:16:22.805 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:01:YTkwMmMwOGY3YTdkMGZjNWQ0ODJhMDk2ZGRmZjlmMjZ64U1I: --dhchap-ctrl-secret DHHC-1:02:MGE3OWQyMmNmNGIwMzA2NTgzYjhmMDM4NDRhZWZkMTBmM2YxYzdmZTBlNDRkYjExHkTEaw==: 00:16:23.388 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.388 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.388 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:16:23.388 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.388 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.388 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.388 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:23.388 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:23.389 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:23.648 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:16:23.648 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:23.648 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:23.648 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:23.648 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:23.648 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.648 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.648 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.648 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.648 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.648 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.648 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.648 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.908 00:16:23.908 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.908 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.908 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.168 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.168 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.168 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.168 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.168 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.168 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:24.168 { 00:16:24.168 "cntlid": 61, 00:16:24.168 "qid": 0, 00:16:24.168 "state": "enabled", 00:16:24.168 "thread": "nvmf_tgt_poll_group_000", 00:16:24.168 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:16:24.168 "listen_address": { 00:16:24.168 "trtype": "TCP", 00:16:24.168 "adrfam": "IPv4", 00:16:24.168 "traddr": "10.0.0.3", 00:16:24.168 "trsvcid": "4420" 00:16:24.168 }, 00:16:24.168 "peer_address": { 00:16:24.168 "trtype": "TCP", 00:16:24.168 "adrfam": "IPv4", 00:16:24.168 "traddr": "10.0.0.1", 00:16:24.168 "trsvcid": "34868" 00:16:24.168 }, 00:16:24.168 "auth": { 00:16:24.168 "state": "completed", 00:16:24.168 "digest": "sha384", 00:16:24.168 "dhgroup": "ffdhe2048" 00:16:24.168 } 00:16:24.168 } 00:16:24.168 ]' 00:16:24.168 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:24.168 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:24.168 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:24.428 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:24.428 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:24.428 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.428 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.428 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.687 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTVjZjM5ZDQ2MWRmOGUzODllYjM2MzVlNDMwOTc1NjA0NzQ0OWFmOWIyZmZlYzlmWliPlg==: --dhchap-ctrl-secret DHHC-1:01:ZWEwMjQzYmM0ODJlZDU5OTBhYjZkOWNmMjIyN2ZkZTGcv5Bp: 00:16:24.687 14:20:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:02:NTVjZjM5ZDQ2MWRmOGUzODllYjM2MzVlNDMwOTc1NjA0NzQ0OWFmOWIyZmZlYzlmWliPlg==: --dhchap-ctrl-secret DHHC-1:01:ZWEwMjQzYmM0ODJlZDU5OTBhYjZkOWNmMjIyN2ZkZTGcv5Bp: 00:16:25.255 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.255 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.255 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:16:25.255 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.255 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.255 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.255 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:25.255 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:25.255 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:25.514 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:16:25.514 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.514 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:25.514 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:25.514 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:25.514 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.514 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key3 00:16:25.514 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.514 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.514 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.514 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:25.514 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:25.514 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:25.773 00:16:25.773 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:25.773 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:25.773 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.033 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.033 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.033 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.033 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.033 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.033 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:26.033 { 00:16:26.033 "cntlid": 63, 00:16:26.033 "qid": 0, 00:16:26.033 "state": "enabled", 00:16:26.033 "thread": "nvmf_tgt_poll_group_000", 00:16:26.033 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:16:26.033 "listen_address": { 00:16:26.033 "trtype": "TCP", 00:16:26.033 "adrfam": "IPv4", 00:16:26.033 "traddr": "10.0.0.3", 00:16:26.033 "trsvcid": "4420" 00:16:26.033 }, 00:16:26.033 "peer_address": { 00:16:26.033 "trtype": "TCP", 00:16:26.033 "adrfam": "IPv4", 00:16:26.033 "traddr": "10.0.0.1", 00:16:26.033 "trsvcid": "34890" 00:16:26.033 }, 00:16:26.033 "auth": { 00:16:26.033 "state": "completed", 00:16:26.033 "digest": "sha384", 00:16:26.033 "dhgroup": "ffdhe2048" 00:16:26.033 } 00:16:26.033 } 00:16:26.033 ]' 00:16:26.033 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:26.033 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:26.033 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:26.033 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:26.033 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:26.033 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.033 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.033 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.292 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjYwYjE4ODZiZGUzYmE2ZWI3NGMxZDE3NjI3YWRhYjhkMDVlM2Q4ZDNjYTI1MWVhZjkwNzY0YWM3ZmMyZmQzZImPBGY=: 00:16:26.292 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:03:YjYwYjE4ODZiZGUzYmE2ZWI3NGMxZDE3NjI3YWRhYjhkMDVlM2Q4ZDNjYTI1MWVhZjkwNzY0YWM3ZmMyZmQzZImPBGY=: 00:16:26.861 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.861 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.861 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:16:26.861 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.861 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.861 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.861 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:26.861 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:26.861 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:26.861 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:27.120 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:16:27.120 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:27.120 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:27.120 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:27.120 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:27.120 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.120 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.120 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.120 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.120 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.120 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.120 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:16:27.120 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.380 00:16:27.380 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:27.380 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:27.380 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.640 14:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.640 14:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.640 14:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.641 14:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.641 14:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.641 14:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:27.641 { 00:16:27.641 "cntlid": 65, 00:16:27.641 "qid": 0, 00:16:27.641 "state": "enabled", 00:16:27.641 "thread": "nvmf_tgt_poll_group_000", 00:16:27.641 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:16:27.641 "listen_address": { 00:16:27.641 "trtype": "TCP", 00:16:27.641 "adrfam": "IPv4", 00:16:27.641 "traddr": "10.0.0.3", 00:16:27.641 "trsvcid": "4420" 00:16:27.641 }, 00:16:27.641 "peer_address": { 00:16:27.641 "trtype": "TCP", 00:16:27.641 "adrfam": "IPv4", 00:16:27.641 "traddr": "10.0.0.1", 00:16:27.641 "trsvcid": "34920" 00:16:27.641 }, 00:16:27.641 "auth": { 00:16:27.641 "state": "completed", 00:16:27.641 "digest": "sha384", 00:16:27.641 "dhgroup": "ffdhe3072" 00:16:27.641 } 00:16:27.641 } 00:16:27.641 ]' 00:16:27.641 14:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:27.641 14:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:27.641 14:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:27.900 14:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:27.900 14:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:27.900 14:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.900 14:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.900 14:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.159 14:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:MjBiNzUzNDk5NTJiMDI4YWY5YjAxNGE2M2JhOWY3MGM5NjdhNjYwN2QxZGEyYjVk1hdk9g==: --dhchap-ctrl-secret DHHC-1:03:MWUxNmY4Njk5NmQ3NzkwNWEyYTk0ZjM5Yzg0NDU1MDkzYTMyYTkzZGRhOGYzYThmZDk0NjM3YmI2MzU3ZGU2OKKEk9A=: 00:16:28.159 14:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:00:MjBiNzUzNDk5NTJiMDI4YWY5YjAxNGE2M2JhOWY3MGM5NjdhNjYwN2QxZGEyYjVk1hdk9g==: --dhchap-ctrl-secret DHHC-1:03:MWUxNmY4Njk5NmQ3NzkwNWEyYTk0ZjM5Yzg0NDU1MDkzYTMyYTkzZGRhOGYzYThmZDk0NjM3YmI2MzU3ZGU2OKKEk9A=: 00:16:28.727 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.727 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.727 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:16:28.727 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.727 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.727 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.727 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:28.727 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:28.727 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:28.986 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:16:28.986 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:28.986 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:28.986 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:28.986 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:28.986 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.986 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.986 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.986 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.986 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.986 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.986 14:20:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.986 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.245 00:16:29.245 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:29.245 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:29.245 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.504 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.504 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.504 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.504 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.504 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.504 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:29.504 { 00:16:29.504 "cntlid": 67, 00:16:29.504 "qid": 0, 00:16:29.504 "state": "enabled", 00:16:29.504 "thread": "nvmf_tgt_poll_group_000", 00:16:29.504 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:16:29.504 "listen_address": { 00:16:29.504 "trtype": "TCP", 00:16:29.504 "adrfam": "IPv4", 00:16:29.504 "traddr": "10.0.0.3", 00:16:29.504 "trsvcid": "4420" 00:16:29.504 }, 00:16:29.504 "peer_address": { 00:16:29.504 "trtype": "TCP", 00:16:29.504 "adrfam": "IPv4", 00:16:29.504 "traddr": "10.0.0.1", 00:16:29.504 "trsvcid": "34952" 00:16:29.504 }, 00:16:29.504 "auth": { 00:16:29.504 "state": "completed", 00:16:29.504 "digest": "sha384", 00:16:29.504 "dhgroup": "ffdhe3072" 00:16:29.504 } 00:16:29.504 } 00:16:29.504 ]' 00:16:29.504 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:29.504 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:29.504 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:29.504 14:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:29.504 14:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:29.504 14:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.504 14:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.504 14:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.764 14:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTkwMmMwOGY3YTdkMGZjNWQ0ODJhMDk2ZGRmZjlmMjZ64U1I: --dhchap-ctrl-secret DHHC-1:02:MGE3OWQyMmNmNGIwMzA2NTgzYjhmMDM4NDRhZWZkMTBmM2YxYzdmZTBlNDRkYjExHkTEaw==: 00:16:29.764 14:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:01:YTkwMmMwOGY3YTdkMGZjNWQ0ODJhMDk2ZGRmZjlmMjZ64U1I: --dhchap-ctrl-secret DHHC-1:02:MGE3OWQyMmNmNGIwMzA2NTgzYjhmMDM4NDRhZWZkMTBmM2YxYzdmZTBlNDRkYjExHkTEaw==: 00:16:30.332 14:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.332 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.332 14:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:16:30.332 14:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.332 14:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.333 14:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.333 14:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:30.333 14:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:30.333 14:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:30.591 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:16:30.591 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:30.591 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:30.592 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:30.592 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:30.592 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.592 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.592 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.592 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.592 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.592 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.592 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.592 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.850 00:16:30.850 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:30.850 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:30.850 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.108 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.108 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.108 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.108 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.108 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.108 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.108 { 00:16:31.108 "cntlid": 69, 00:16:31.108 "qid": 0, 00:16:31.108 "state": "enabled", 00:16:31.108 "thread": "nvmf_tgt_poll_group_000", 00:16:31.108 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:16:31.108 "listen_address": { 00:16:31.108 "trtype": "TCP", 00:16:31.108 "adrfam": "IPv4", 00:16:31.108 "traddr": "10.0.0.3", 00:16:31.108 "trsvcid": "4420" 00:16:31.108 }, 00:16:31.108 "peer_address": { 00:16:31.108 "trtype": "TCP", 00:16:31.108 "adrfam": "IPv4", 00:16:31.108 "traddr": "10.0.0.1", 00:16:31.108 "trsvcid": "34972" 00:16:31.108 }, 00:16:31.108 "auth": { 00:16:31.108 "state": "completed", 00:16:31.108 "digest": "sha384", 00:16:31.108 "dhgroup": "ffdhe3072" 00:16:31.108 } 00:16:31.108 } 00:16:31.108 ]' 00:16:31.108 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:31.108 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:31.108 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:31.108 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:31.108 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:31.368 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.368 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:16:31.368 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.368 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTVjZjM5ZDQ2MWRmOGUzODllYjM2MzVlNDMwOTc1NjA0NzQ0OWFmOWIyZmZlYzlmWliPlg==: --dhchap-ctrl-secret DHHC-1:01:ZWEwMjQzYmM0ODJlZDU5OTBhYjZkOWNmMjIyN2ZkZTGcv5Bp: 00:16:31.368 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:02:NTVjZjM5ZDQ2MWRmOGUzODllYjM2MzVlNDMwOTc1NjA0NzQ0OWFmOWIyZmZlYzlmWliPlg==: --dhchap-ctrl-secret DHHC-1:01:ZWEwMjQzYmM0ODJlZDU5OTBhYjZkOWNmMjIyN2ZkZTGcv5Bp: 00:16:31.937 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.198 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.198 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:16:32.198 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.198 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.198 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.198 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.198 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:32.198 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:32.198 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:16:32.198 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:32.199 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:32.199 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:32.199 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:32.199 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.199 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key3 00:16:32.199 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.199 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.199 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.199 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:32.199 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:32.199 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:32.768 00:16:32.768 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:32.768 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:32.768 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.768 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.768 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.768 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.768 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.768 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.768 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:32.768 { 00:16:32.768 "cntlid": 71, 00:16:32.768 "qid": 0, 00:16:32.768 "state": "enabled", 00:16:32.768 "thread": "nvmf_tgt_poll_group_000", 00:16:32.768 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:16:32.768 "listen_address": { 00:16:32.768 "trtype": "TCP", 00:16:32.768 "adrfam": "IPv4", 00:16:32.768 "traddr": "10.0.0.3", 00:16:32.768 "trsvcid": "4420" 00:16:32.768 }, 00:16:32.768 "peer_address": { 00:16:32.768 "trtype": "TCP", 00:16:32.768 "adrfam": "IPv4", 00:16:32.768 "traddr": "10.0.0.1", 00:16:32.768 "trsvcid": "35006" 00:16:32.768 }, 00:16:32.768 "auth": { 00:16:32.768 "state": "completed", 00:16:32.768 "digest": "sha384", 00:16:32.768 "dhgroup": "ffdhe3072" 00:16:32.768 } 00:16:32.768 } 00:16:32.768 ]' 00:16:32.768 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:33.027 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:33.027 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:33.027 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:33.027 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:33.027 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.027 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.027 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.285 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjYwYjE4ODZiZGUzYmE2ZWI3NGMxZDE3NjI3YWRhYjhkMDVlM2Q4ZDNjYTI1MWVhZjkwNzY0YWM3ZmMyZmQzZImPBGY=: 00:16:33.285 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:03:YjYwYjE4ODZiZGUzYmE2ZWI3NGMxZDE3NjI3YWRhYjhkMDVlM2Q4ZDNjYTI1MWVhZjkwNzY0YWM3ZmMyZmQzZImPBGY=: 00:16:33.854 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.854 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.854 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:16:33.854 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.854 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.854 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.854 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:33.854 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:33.854 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:33.854 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:34.113 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:16:34.113 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:34.113 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:34.113 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:34.113 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:34.113 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.113 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.113 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.113 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.113 14:21:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.113 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.113 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.113 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.372 00:16:34.372 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:34.372 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:34.372 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.632 14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.633 14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.633 14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.633 14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.633 14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.633 14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:34.633 { 00:16:34.633 "cntlid": 73, 00:16:34.633 "qid": 0, 00:16:34.633 "state": "enabled", 00:16:34.633 "thread": "nvmf_tgt_poll_group_000", 00:16:34.633 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:16:34.633 "listen_address": { 00:16:34.633 "trtype": "TCP", 00:16:34.633 "adrfam": "IPv4", 00:16:34.633 "traddr": "10.0.0.3", 00:16:34.633 "trsvcid": "4420" 00:16:34.633 }, 00:16:34.633 "peer_address": { 00:16:34.633 "trtype": "TCP", 00:16:34.633 "adrfam": "IPv4", 00:16:34.633 "traddr": "10.0.0.1", 00:16:34.633 "trsvcid": "37764" 00:16:34.633 }, 00:16:34.633 "auth": { 00:16:34.633 "state": "completed", 00:16:34.633 "digest": "sha384", 00:16:34.633 "dhgroup": "ffdhe4096" 00:16:34.633 } 00:16:34.633 } 00:16:34.633 ]' 00:16:34.633 14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:34.633 14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:34.633 14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:34.633 14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:34.633 14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:34.893 14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.893 14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.893 14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.152 14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjBiNzUzNDk5NTJiMDI4YWY5YjAxNGE2M2JhOWY3MGM5NjdhNjYwN2QxZGEyYjVk1hdk9g==: --dhchap-ctrl-secret DHHC-1:03:MWUxNmY4Njk5NmQ3NzkwNWEyYTk0ZjM5Yzg0NDU1MDkzYTMyYTkzZGRhOGYzYThmZDk0NjM3YmI2MzU3ZGU2OKKEk9A=: 00:16:35.152 14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:00:MjBiNzUzNDk5NTJiMDI4YWY5YjAxNGE2M2JhOWY3MGM5NjdhNjYwN2QxZGEyYjVk1hdk9g==: --dhchap-ctrl-secret DHHC-1:03:MWUxNmY4Njk5NmQ3NzkwNWEyYTk0ZjM5Yzg0NDU1MDkzYTMyYTkzZGRhOGYzYThmZDk0NjM3YmI2MzU3ZGU2OKKEk9A=: 00:16:35.720 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.720 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.720 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:16:35.720 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.720 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.720 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.720 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:35.720 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:35.720 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:35.979 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:16:35.979 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:35.979 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:35.979 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:35.979 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:35.979 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.979 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.979 14:21:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.979 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.979 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.979 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.979 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.979 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.238 00:16:36.238 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.238 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.238 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.497 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.497 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.497 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.497 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.497 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.497 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.497 { 00:16:36.497 "cntlid": 75, 00:16:36.497 "qid": 0, 00:16:36.497 "state": "enabled", 00:16:36.497 "thread": "nvmf_tgt_poll_group_000", 00:16:36.497 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:16:36.497 "listen_address": { 00:16:36.497 "trtype": "TCP", 00:16:36.497 "adrfam": "IPv4", 00:16:36.497 "traddr": "10.0.0.3", 00:16:36.497 "trsvcid": "4420" 00:16:36.497 }, 00:16:36.497 "peer_address": { 00:16:36.497 "trtype": "TCP", 00:16:36.497 "adrfam": "IPv4", 00:16:36.497 "traddr": "10.0.0.1", 00:16:36.497 "trsvcid": "37778" 00:16:36.497 }, 00:16:36.497 "auth": { 00:16:36.497 "state": "completed", 00:16:36.497 "digest": "sha384", 00:16:36.497 "dhgroup": "ffdhe4096" 00:16:36.497 } 00:16:36.497 } 00:16:36.497 ]' 00:16:36.497 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.497 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:36.497 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:36.497 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:16:36.497 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:36.756 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.756 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.756 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.015 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTkwMmMwOGY3YTdkMGZjNWQ0ODJhMDk2ZGRmZjlmMjZ64U1I: --dhchap-ctrl-secret DHHC-1:02:MGE3OWQyMmNmNGIwMzA2NTgzYjhmMDM4NDRhZWZkMTBmM2YxYzdmZTBlNDRkYjExHkTEaw==: 00:16:37.015 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:01:YTkwMmMwOGY3YTdkMGZjNWQ0ODJhMDk2ZGRmZjlmMjZ64U1I: --dhchap-ctrl-secret DHHC-1:02:MGE3OWQyMmNmNGIwMzA2NTgzYjhmMDM4NDRhZWZkMTBmM2YxYzdmZTBlNDRkYjExHkTEaw==: 00:16:37.584 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.584 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.584 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:16:37.584 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.584 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.584 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.584 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:37.584 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:37.584 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:37.843 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:16:37.843 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:37.843 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:37.843 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:37.843 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:37.843 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.843 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.843 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.843 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.843 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.843 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.843 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.843 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.102 00:16:38.102 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:38.102 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:38.102 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.361 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.361 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.361 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.361 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.361 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.361 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.361 { 00:16:38.361 "cntlid": 77, 00:16:38.361 "qid": 0, 00:16:38.361 "state": "enabled", 00:16:38.361 "thread": "nvmf_tgt_poll_group_000", 00:16:38.361 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:16:38.361 "listen_address": { 00:16:38.361 "trtype": "TCP", 00:16:38.361 "adrfam": "IPv4", 00:16:38.361 "traddr": "10.0.0.3", 00:16:38.361 "trsvcid": "4420" 00:16:38.361 }, 00:16:38.361 "peer_address": { 00:16:38.361 "trtype": "TCP", 00:16:38.361 "adrfam": "IPv4", 00:16:38.361 "traddr": "10.0.0.1", 00:16:38.361 "trsvcid": "37802" 00:16:38.361 }, 00:16:38.361 "auth": { 00:16:38.361 "state": "completed", 00:16:38.361 "digest": "sha384", 00:16:38.361 "dhgroup": "ffdhe4096" 00:16:38.361 } 00:16:38.361 } 00:16:38.361 ]' 00:16:38.361 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.361 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:38.361 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:16:38.361 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:38.361 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.361 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.361 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.361 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.621 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTVjZjM5ZDQ2MWRmOGUzODllYjM2MzVlNDMwOTc1NjA0NzQ0OWFmOWIyZmZlYzlmWliPlg==: --dhchap-ctrl-secret DHHC-1:01:ZWEwMjQzYmM0ODJlZDU5OTBhYjZkOWNmMjIyN2ZkZTGcv5Bp: 00:16:38.621 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:02:NTVjZjM5ZDQ2MWRmOGUzODllYjM2MzVlNDMwOTc1NjA0NzQ0OWFmOWIyZmZlYzlmWliPlg==: --dhchap-ctrl-secret DHHC-1:01:ZWEwMjQzYmM0ODJlZDU5OTBhYjZkOWNmMjIyN2ZkZTGcv5Bp: 00:16:39.189 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.189 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.189 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:16:39.189 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.189 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.189 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.189 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:39.189 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:39.189 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:39.448 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:16:39.448 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:39.448 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:39.448 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:39.448 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:39.448 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.448 14:21:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key3 00:16:39.448 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.448 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.448 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.448 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:39.448 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:39.448 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:40.017 00:16:40.017 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:40.017 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.017 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.276 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.276 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.276 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.276 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.276 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.276 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:40.276 { 00:16:40.276 "cntlid": 79, 00:16:40.276 "qid": 0, 00:16:40.276 "state": "enabled", 00:16:40.276 "thread": "nvmf_tgt_poll_group_000", 00:16:40.276 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:16:40.276 "listen_address": { 00:16:40.276 "trtype": "TCP", 00:16:40.276 "adrfam": "IPv4", 00:16:40.276 "traddr": "10.0.0.3", 00:16:40.276 "trsvcid": "4420" 00:16:40.276 }, 00:16:40.276 "peer_address": { 00:16:40.276 "trtype": "TCP", 00:16:40.276 "adrfam": "IPv4", 00:16:40.276 "traddr": "10.0.0.1", 00:16:40.276 "trsvcid": "37836" 00:16:40.276 }, 00:16:40.276 "auth": { 00:16:40.276 "state": "completed", 00:16:40.276 "digest": "sha384", 00:16:40.276 "dhgroup": "ffdhe4096" 00:16:40.276 } 00:16:40.276 } 00:16:40.276 ]' 00:16:40.276 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:40.276 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:40.276 14:21:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:40.276 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:40.276 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:40.276 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.276 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.276 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.535 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjYwYjE4ODZiZGUzYmE2ZWI3NGMxZDE3NjI3YWRhYjhkMDVlM2Q4ZDNjYTI1MWVhZjkwNzY0YWM3ZmMyZmQzZImPBGY=: 00:16:40.535 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:03:YjYwYjE4ODZiZGUzYmE2ZWI3NGMxZDE3NjI3YWRhYjhkMDVlM2Q4ZDNjYTI1MWVhZjkwNzY0YWM3ZmMyZmQzZImPBGY=: 00:16:41.101 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.101 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.101 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:16:41.101 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.101 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.101 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.101 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:41.101 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.101 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:41.101 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:41.360 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:16:41.360 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.360 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:41.360 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:41.360 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:41.360 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.360 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.360 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.360 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.360 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.360 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.360 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.360 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.928 00:16:41.928 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:41.928 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.928 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:41.928 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.190 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.190 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.190 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.190 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.190 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.190 { 00:16:42.190 "cntlid": 81, 00:16:42.190 "qid": 0, 00:16:42.190 "state": "enabled", 00:16:42.190 "thread": "nvmf_tgt_poll_group_000", 00:16:42.190 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:16:42.190 "listen_address": { 00:16:42.190 "trtype": "TCP", 00:16:42.190 "adrfam": "IPv4", 00:16:42.190 "traddr": "10.0.0.3", 00:16:42.190 "trsvcid": "4420" 00:16:42.190 }, 00:16:42.190 "peer_address": { 00:16:42.190 "trtype": "TCP", 00:16:42.190 "adrfam": "IPv4", 00:16:42.190 "traddr": "10.0.0.1", 00:16:42.190 "trsvcid": "37866" 00:16:42.190 }, 00:16:42.190 "auth": { 00:16:42.190 "state": "completed", 00:16:42.190 "digest": "sha384", 00:16:42.190 "dhgroup": "ffdhe6144" 00:16:42.190 } 00:16:42.190 } 00:16:42.190 ]' 00:16:42.190 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
00:16:42.190 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:42.190 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.190 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:42.190 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.190 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.190 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.190 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.454 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjBiNzUzNDk5NTJiMDI4YWY5YjAxNGE2M2JhOWY3MGM5NjdhNjYwN2QxZGEyYjVk1hdk9g==: --dhchap-ctrl-secret DHHC-1:03:MWUxNmY4Njk5NmQ3NzkwNWEyYTk0ZjM5Yzg0NDU1MDkzYTMyYTkzZGRhOGYzYThmZDk0NjM3YmI2MzU3ZGU2OKKEk9A=: 00:16:42.454 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:00:MjBiNzUzNDk5NTJiMDI4YWY5YjAxNGE2M2JhOWY3MGM5NjdhNjYwN2QxZGEyYjVk1hdk9g==: --dhchap-ctrl-secret DHHC-1:03:MWUxNmY4Njk5NmQ3NzkwNWEyYTk0ZjM5Yzg0NDU1MDkzYTMyYTkzZGRhOGYzYThmZDk0NjM3YmI2MzU3ZGU2OKKEk9A=: 00:16:43.020 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.020 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.020 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:16:43.020 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.020 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.020 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.020 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.020 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:43.020 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:43.278 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:16:43.278 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.278 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:43.278 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:16:43.278 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:43.278 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.278 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.278 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.278 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.278 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.278 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.278 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.278 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.844 00:16:43.844 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.844 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:43.844 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.102 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.102 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.102 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.102 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.102 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.102 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:44.102 { 00:16:44.102 "cntlid": 83, 00:16:44.102 "qid": 0, 00:16:44.102 "state": "enabled", 00:16:44.102 "thread": "nvmf_tgt_poll_group_000", 00:16:44.102 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:16:44.102 "listen_address": { 00:16:44.102 "trtype": "TCP", 00:16:44.102 "adrfam": "IPv4", 00:16:44.102 "traddr": "10.0.0.3", 00:16:44.102 "trsvcid": "4420" 00:16:44.102 }, 00:16:44.102 "peer_address": { 00:16:44.102 "trtype": "TCP", 00:16:44.102 "adrfam": "IPv4", 00:16:44.102 "traddr": "10.0.0.1", 00:16:44.102 "trsvcid": "32986" 00:16:44.102 }, 00:16:44.102 "auth": { 00:16:44.102 "state": "completed", 00:16:44.102 "digest": "sha384", 
00:16:44.102 "dhgroup": "ffdhe6144" 00:16:44.102 } 00:16:44.102 } 00:16:44.102 ]' 00:16:44.102 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:44.102 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:44.102 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.102 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:44.102 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.103 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.103 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.103 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.360 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTkwMmMwOGY3YTdkMGZjNWQ0ODJhMDk2ZGRmZjlmMjZ64U1I: --dhchap-ctrl-secret DHHC-1:02:MGE3OWQyMmNmNGIwMzA2NTgzYjhmMDM4NDRhZWZkMTBmM2YxYzdmZTBlNDRkYjExHkTEaw==: 00:16:44.361 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:01:YTkwMmMwOGY3YTdkMGZjNWQ0ODJhMDk2ZGRmZjlmMjZ64U1I: --dhchap-ctrl-secret DHHC-1:02:MGE3OWQyMmNmNGIwMzA2NTgzYjhmMDM4NDRhZWZkMTBmM2YxYzdmZTBlNDRkYjExHkTEaw==: 00:16:44.927 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.927 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:16:44.927 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.927 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.927 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.927 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:44.927 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:44.927 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:45.185 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:16:45.185 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.185 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:16:45.185 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:45.185 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:45.185 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.186 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.186 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.186 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.186 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.186 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.186 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.186 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.443 00:16:45.701 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.701 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.701 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.701 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.701 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.701 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.701 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.960 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.960 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.960 { 00:16:45.960 "cntlid": 85, 00:16:45.960 "qid": 0, 00:16:45.960 "state": "enabled", 00:16:45.960 "thread": "nvmf_tgt_poll_group_000", 00:16:45.960 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:16:45.960 "listen_address": { 00:16:45.960 "trtype": "TCP", 00:16:45.960 "adrfam": "IPv4", 00:16:45.960 "traddr": "10.0.0.3", 00:16:45.960 "trsvcid": "4420" 00:16:45.960 }, 00:16:45.960 "peer_address": { 00:16:45.960 "trtype": "TCP", 00:16:45.960 "adrfam": "IPv4", 00:16:45.960 "traddr": "10.0.0.1", 00:16:45.960 "trsvcid": "33016" 
00:16:45.960 }, 00:16:45.960 "auth": { 00:16:45.960 "state": "completed", 00:16:45.960 "digest": "sha384", 00:16:45.960 "dhgroup": "ffdhe6144" 00:16:45.960 } 00:16:45.960 } 00:16:45.960 ]' 00:16:45.960 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.960 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:45.960 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:45.960 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:45.960 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:45.960 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.960 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.960 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.218 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTVjZjM5ZDQ2MWRmOGUzODllYjM2MzVlNDMwOTc1NjA0NzQ0OWFmOWIyZmZlYzlmWliPlg==: --dhchap-ctrl-secret DHHC-1:01:ZWEwMjQzYmM0ODJlZDU5OTBhYjZkOWNmMjIyN2ZkZTGcv5Bp: 00:16:46.218 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:02:NTVjZjM5ZDQ2MWRmOGUzODllYjM2MzVlNDMwOTc1NjA0NzQ0OWFmOWIyZmZlYzlmWliPlg==: --dhchap-ctrl-secret DHHC-1:01:ZWEwMjQzYmM0ODJlZDU5OTBhYjZkOWNmMjIyN2ZkZTGcv5Bp: 00:16:46.784 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.784 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.784 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:16:46.784 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.784 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.784 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.784 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:46.784 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:46.784 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:47.042 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:16:47.043 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:16:47.043 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:47.043 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:47.043 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:47.043 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.043 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key3 00:16:47.043 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.043 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.043 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.043 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:47.043 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:47.043 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:47.301 00:16:47.559 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.559 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.559 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.559 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.559 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.559 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.559 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.559 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.559 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:47.559 { 00:16:47.559 "cntlid": 87, 00:16:47.559 "qid": 0, 00:16:47.559 "state": "enabled", 00:16:47.559 "thread": "nvmf_tgt_poll_group_000", 00:16:47.559 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:16:47.559 "listen_address": { 00:16:47.559 "trtype": "TCP", 00:16:47.559 "adrfam": "IPv4", 00:16:47.559 "traddr": "10.0.0.3", 00:16:47.559 "trsvcid": "4420" 00:16:47.559 }, 00:16:47.559 "peer_address": { 00:16:47.559 "trtype": "TCP", 00:16:47.559 "adrfam": "IPv4", 00:16:47.559 "traddr": "10.0.0.1", 00:16:47.559 "trsvcid": 
"33034" 00:16:47.559 }, 00:16:47.559 "auth": { 00:16:47.559 "state": "completed", 00:16:47.559 "digest": "sha384", 00:16:47.559 "dhgroup": "ffdhe6144" 00:16:47.559 } 00:16:47.559 } 00:16:47.559 ]' 00:16:47.560 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.818 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:47.818 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.818 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:47.818 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.818 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.818 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.818 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.076 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjYwYjE4ODZiZGUzYmE2ZWI3NGMxZDE3NjI3YWRhYjhkMDVlM2Q4ZDNjYTI1MWVhZjkwNzY0YWM3ZmMyZmQzZImPBGY=: 00:16:48.076 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:03:YjYwYjE4ODZiZGUzYmE2ZWI3NGMxZDE3NjI3YWRhYjhkMDVlM2Q4ZDNjYTI1MWVhZjkwNzY0YWM3ZmMyZmQzZImPBGY=: 00:16:48.642 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.642 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.642 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:16:48.642 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.642 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.642 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.642 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:48.642 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:48.642 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:48.642 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:48.901 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:16:48.901 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:16:48.901 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:48.901 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:48.901 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:48.901 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.901 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.901 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.901 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.901 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.901 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.901 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.901 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.468 00:16:49.468 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:49.468 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.468 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.726 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.726 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.726 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.726 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.726 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.726 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.726 { 00:16:49.726 "cntlid": 89, 00:16:49.726 "qid": 0, 00:16:49.726 "state": "enabled", 00:16:49.726 "thread": "nvmf_tgt_poll_group_000", 00:16:49.726 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:16:49.726 "listen_address": { 00:16:49.726 "trtype": "TCP", 00:16:49.726 "adrfam": "IPv4", 00:16:49.726 "traddr": "10.0.0.3", 00:16:49.726 "trsvcid": "4420" 00:16:49.726 }, 00:16:49.726 "peer_address": { 00:16:49.726 
"trtype": "TCP", 00:16:49.726 "adrfam": "IPv4", 00:16:49.726 "traddr": "10.0.0.1", 00:16:49.726 "trsvcid": "33064" 00:16:49.726 }, 00:16:49.726 "auth": { 00:16:49.726 "state": "completed", 00:16:49.726 "digest": "sha384", 00:16:49.726 "dhgroup": "ffdhe8192" 00:16:49.726 } 00:16:49.726 } 00:16:49.726 ]' 00:16:49.726 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.726 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:49.726 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.726 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:49.726 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.984 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.984 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.984 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.984 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjBiNzUzNDk5NTJiMDI4YWY5YjAxNGE2M2JhOWY3MGM5NjdhNjYwN2QxZGEyYjVk1hdk9g==: --dhchap-ctrl-secret DHHC-1:03:MWUxNmY4Njk5NmQ3NzkwNWEyYTk0ZjM5Yzg0NDU1MDkzYTMyYTkzZGRhOGYzYThmZDk0NjM3YmI2MzU3ZGU2OKKEk9A=: 00:16:49.984 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:00:MjBiNzUzNDk5NTJiMDI4YWY5YjAxNGE2M2JhOWY3MGM5NjdhNjYwN2QxZGEyYjVk1hdk9g==: --dhchap-ctrl-secret DHHC-1:03:MWUxNmY4Njk5NmQ3NzkwNWEyYTk0ZjM5Yzg0NDU1MDkzYTMyYTkzZGRhOGYzYThmZDk0NjM3YmI2MzU3ZGU2OKKEk9A=: 00:16:50.551 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.809 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.809 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:16:50.809 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.809 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.809 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.809 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:50.809 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:50.809 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:50.809 14:21:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:16:50.809 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.809 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:50.809 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:50.809 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:50.809 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.809 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.809 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.809 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.809 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.809 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.809 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.809 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.375 00:16:51.375 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:51.375 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:51.375 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.633 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.633 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.633 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.633 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.633 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.633 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:51.633 { 00:16:51.633 "cntlid": 91, 00:16:51.634 "qid": 0, 00:16:51.634 "state": "enabled", 00:16:51.634 "thread": "nvmf_tgt_poll_group_000", 00:16:51.634 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 
00:16:51.634 "listen_address": { 00:16:51.634 "trtype": "TCP", 00:16:51.634 "adrfam": "IPv4", 00:16:51.634 "traddr": "10.0.0.3", 00:16:51.634 "trsvcid": "4420" 00:16:51.634 }, 00:16:51.634 "peer_address": { 00:16:51.634 "trtype": "TCP", 00:16:51.634 "adrfam": "IPv4", 00:16:51.634 "traddr": "10.0.0.1", 00:16:51.634 "trsvcid": "33106" 00:16:51.634 }, 00:16:51.634 "auth": { 00:16:51.634 "state": "completed", 00:16:51.634 "digest": "sha384", 00:16:51.634 "dhgroup": "ffdhe8192" 00:16:51.634 } 00:16:51.634 } 00:16:51.634 ]' 00:16:51.634 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.891 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:51.891 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.891 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:51.891 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:51.891 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.891 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.891 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.150 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTkwMmMwOGY3YTdkMGZjNWQ0ODJhMDk2ZGRmZjlmMjZ64U1I: --dhchap-ctrl-secret DHHC-1:02:MGE3OWQyMmNmNGIwMzA2NTgzYjhmMDM4NDRhZWZkMTBmM2YxYzdmZTBlNDRkYjExHkTEaw==: 00:16:52.150 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:01:YTkwMmMwOGY3YTdkMGZjNWQ0ODJhMDk2ZGRmZjlmMjZ64U1I: --dhchap-ctrl-secret DHHC-1:02:MGE3OWQyMmNmNGIwMzA2NTgzYjhmMDM4NDRhZWZkMTBmM2YxYzdmZTBlNDRkYjExHkTEaw==: 00:16:52.715 14:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.715 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.716 14:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:16:52.716 14:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.716 14:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.716 14:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.716 14:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:52.716 14:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:52.716 14:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:52.974 14:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:16:52.974 14:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:52.974 14:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:52.974 14:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:52.974 14:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:52.974 14:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.974 14:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.974 14:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.974 14:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.974 14:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.974 14:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.974 14:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.974 14:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.540 00:16:53.540 14:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:53.540 14:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:53.540 14:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.799 14:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.799 14:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.799 14:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.799 14:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.799 14:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.799 14:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:53.799 { 00:16:53.799 "cntlid": 93, 00:16:53.799 "qid": 0, 00:16:53.799 "state": "enabled", 00:16:53.799 "thread": 
"nvmf_tgt_poll_group_000", 00:16:53.799 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:16:53.799 "listen_address": { 00:16:53.799 "trtype": "TCP", 00:16:53.799 "adrfam": "IPv4", 00:16:53.799 "traddr": "10.0.0.3", 00:16:53.799 "trsvcid": "4420" 00:16:53.799 }, 00:16:53.799 "peer_address": { 00:16:53.799 "trtype": "TCP", 00:16:53.799 "adrfam": "IPv4", 00:16:53.799 "traddr": "10.0.0.1", 00:16:53.799 "trsvcid": "33132" 00:16:53.799 }, 00:16:53.799 "auth": { 00:16:53.799 "state": "completed", 00:16:53.799 "digest": "sha384", 00:16:53.799 "dhgroup": "ffdhe8192" 00:16:53.799 } 00:16:53.799 } 00:16:53.799 ]' 00:16:53.799 14:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:53.799 14:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:53.799 14:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:53.799 14:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:53.799 14:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:53.799 14:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.799 14:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.799 14:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.057 14:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTVjZjM5ZDQ2MWRmOGUzODllYjM2MzVlNDMwOTc1NjA0NzQ0OWFmOWIyZmZlYzlmWliPlg==: --dhchap-ctrl-secret DHHC-1:01:ZWEwMjQzYmM0ODJlZDU5OTBhYjZkOWNmMjIyN2ZkZTGcv5Bp: 00:16:54.057 14:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:02:NTVjZjM5ZDQ2MWRmOGUzODllYjM2MzVlNDMwOTc1NjA0NzQ0OWFmOWIyZmZlYzlmWliPlg==: --dhchap-ctrl-secret DHHC-1:01:ZWEwMjQzYmM0ODJlZDU5OTBhYjZkOWNmMjIyN2ZkZTGcv5Bp: 00:16:54.622 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.622 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.622 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:16:54.622 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.622 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.622 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.622 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:54.622 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:54.622 14:21:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:54.881 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:16:54.881 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:54.881 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:54.881 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:54.881 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:54.881 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.881 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key3 00:16:54.881 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.881 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.881 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.881 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:54.881 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:54.881 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:55.446 00:16:55.446 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:55.447 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:55.447 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.705 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.705 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.705 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.705 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.705 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.705 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:55.705 { 00:16:55.705 "cntlid": 95, 00:16:55.705 "qid": 0, 00:16:55.705 "state": "enabled", 00:16:55.705 
"thread": "nvmf_tgt_poll_group_000", 00:16:55.705 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:16:55.705 "listen_address": { 00:16:55.705 "trtype": "TCP", 00:16:55.705 "adrfam": "IPv4", 00:16:55.705 "traddr": "10.0.0.3", 00:16:55.705 "trsvcid": "4420" 00:16:55.705 }, 00:16:55.705 "peer_address": { 00:16:55.705 "trtype": "TCP", 00:16:55.705 "adrfam": "IPv4", 00:16:55.705 "traddr": "10.0.0.1", 00:16:55.705 "trsvcid": "41762" 00:16:55.705 }, 00:16:55.705 "auth": { 00:16:55.705 "state": "completed", 00:16:55.705 "digest": "sha384", 00:16:55.705 "dhgroup": "ffdhe8192" 00:16:55.705 } 00:16:55.705 } 00:16:55.705 ]' 00:16:55.705 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:55.705 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:55.963 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:55.963 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:55.963 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:55.963 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.963 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.963 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.221 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjYwYjE4ODZiZGUzYmE2ZWI3NGMxZDE3NjI3YWRhYjhkMDVlM2Q4ZDNjYTI1MWVhZjkwNzY0YWM3ZmMyZmQzZImPBGY=: 00:16:56.221 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:03:YjYwYjE4ODZiZGUzYmE2ZWI3NGMxZDE3NjI3YWRhYjhkMDVlM2Q4ZDNjYTI1MWVhZjkwNzY0YWM3ZmMyZmQzZImPBGY=: 00:16:56.787 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.787 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.787 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:16:56.787 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.787 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.787 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.787 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:56.787 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:56.787 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.787 14:21:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:56.787 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:57.045 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:16:57.045 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:57.045 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:57.045 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:57.045 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:57.045 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.045 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.045 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.045 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.045 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.045 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.045 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.045 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.310 00:16:57.310 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:57.310 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:57.310 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.569 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.569 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.569 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.569 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.569 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.569 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.569 { 00:16:57.569 "cntlid": 97, 00:16:57.569 "qid": 0, 00:16:57.569 "state": "enabled", 00:16:57.569 "thread": "nvmf_tgt_poll_group_000", 00:16:57.569 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:16:57.569 "listen_address": { 00:16:57.569 "trtype": "TCP", 00:16:57.569 "adrfam": "IPv4", 00:16:57.569 "traddr": "10.0.0.3", 00:16:57.569 "trsvcid": "4420" 00:16:57.569 }, 00:16:57.569 "peer_address": { 00:16:57.569 "trtype": "TCP", 00:16:57.569 "adrfam": "IPv4", 00:16:57.569 "traddr": "10.0.0.1", 00:16:57.569 "trsvcid": "41796" 00:16:57.569 }, 00:16:57.569 "auth": { 00:16:57.569 "state": "completed", 00:16:57.569 "digest": "sha512", 00:16:57.569 "dhgroup": "null" 00:16:57.569 } 00:16:57.569 } 00:16:57.569 ]' 00:16:57.569 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:57.569 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:57.569 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:57.569 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:57.569 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:57.569 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.569 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.569 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.135 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjBiNzUzNDk5NTJiMDI4YWY5YjAxNGE2M2JhOWY3MGM5NjdhNjYwN2QxZGEyYjVk1hdk9g==: --dhchap-ctrl-secret DHHC-1:03:MWUxNmY4Njk5NmQ3NzkwNWEyYTk0ZjM5Yzg0NDU1MDkzYTMyYTkzZGRhOGYzYThmZDk0NjM3YmI2MzU3ZGU2OKKEk9A=: 00:16:58.136 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:00:MjBiNzUzNDk5NTJiMDI4YWY5YjAxNGE2M2JhOWY3MGM5NjdhNjYwN2QxZGEyYjVk1hdk9g==: --dhchap-ctrl-secret DHHC-1:03:MWUxNmY4Njk5NmQ3NzkwNWEyYTk0ZjM5Yzg0NDU1MDkzYTMyYTkzZGRhOGYzYThmZDk0NjM3YmI2MzU3ZGU2OKKEk9A=: 00:16:58.702 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.702 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.702 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:16:58.702 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.702 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.702 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:16:58.702 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:58.702 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:58.702 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:58.961 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:16:58.961 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.961 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:58.961 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:58.961 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:58.961 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.961 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.961 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.961 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.961 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.961 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.961 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.961 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.219 00:16:59.219 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:59.219 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:59.220 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.478 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.478 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.478 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.478 14:21:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.478 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.478 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:59.478 { 00:16:59.478 "cntlid": 99, 00:16:59.478 "qid": 0, 00:16:59.478 "state": "enabled", 00:16:59.478 "thread": "nvmf_tgt_poll_group_000", 00:16:59.478 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:16:59.478 "listen_address": { 00:16:59.478 "trtype": "TCP", 00:16:59.478 "adrfam": "IPv4", 00:16:59.478 "traddr": "10.0.0.3", 00:16:59.478 "trsvcid": "4420" 00:16:59.478 }, 00:16:59.478 "peer_address": { 00:16:59.478 "trtype": "TCP", 00:16:59.478 "adrfam": "IPv4", 00:16:59.478 "traddr": "10.0.0.1", 00:16:59.478 "trsvcid": "41814" 00:16:59.478 }, 00:16:59.478 "auth": { 00:16:59.478 "state": "completed", 00:16:59.478 "digest": "sha512", 00:16:59.478 "dhgroup": "null" 00:16:59.478 } 00:16:59.478 } 00:16:59.478 ]' 00:16:59.478 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:59.478 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:59.478 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.478 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:59.478 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.478 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.478 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.478 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.737 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTkwMmMwOGY3YTdkMGZjNWQ0ODJhMDk2ZGRmZjlmMjZ64U1I: --dhchap-ctrl-secret DHHC-1:02:MGE3OWQyMmNmNGIwMzA2NTgzYjhmMDM4NDRhZWZkMTBmM2YxYzdmZTBlNDRkYjExHkTEaw==: 00:16:59.737 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:01:YTkwMmMwOGY3YTdkMGZjNWQ0ODJhMDk2ZGRmZjlmMjZ64U1I: --dhchap-ctrl-secret DHHC-1:02:MGE3OWQyMmNmNGIwMzA2NTgzYjhmMDM4NDRhZWZkMTBmM2YxYzdmZTBlNDRkYjExHkTEaw==: 00:17:00.304 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.304 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.304 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:17:00.304 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.304 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.304 14:21:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.304 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:00.304 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:00.304 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:00.562 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:17:00.562 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.562 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:00.562 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:00.562 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:00.562 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.562 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.562 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.562 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.562 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.562 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.562 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.563 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.820 00:17:01.078 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:01.078 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.078 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.078 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.078 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.078 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.078 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.337 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.337 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:01.337 { 00:17:01.337 "cntlid": 101, 00:17:01.337 "qid": 0, 00:17:01.337 "state": "enabled", 00:17:01.337 "thread": "nvmf_tgt_poll_group_000", 00:17:01.337 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:17:01.337 "listen_address": { 00:17:01.337 "trtype": "TCP", 00:17:01.337 "adrfam": "IPv4", 00:17:01.337 "traddr": "10.0.0.3", 00:17:01.337 "trsvcid": "4420" 00:17:01.337 }, 00:17:01.337 "peer_address": { 00:17:01.337 "trtype": "TCP", 00:17:01.337 "adrfam": "IPv4", 00:17:01.337 "traddr": "10.0.0.1", 00:17:01.337 "trsvcid": "41844" 00:17:01.337 }, 00:17:01.337 "auth": { 00:17:01.337 "state": "completed", 00:17:01.337 "digest": "sha512", 00:17:01.337 "dhgroup": "null" 00:17:01.337 } 00:17:01.337 } 00:17:01.337 ]' 00:17:01.337 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:01.337 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:01.337 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.337 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:01.337 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.337 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.337 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.337 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.595 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTVjZjM5ZDQ2MWRmOGUzODllYjM2MzVlNDMwOTc1NjA0NzQ0OWFmOWIyZmZlYzlmWliPlg==: --dhchap-ctrl-secret DHHC-1:01:ZWEwMjQzYmM0ODJlZDU5OTBhYjZkOWNmMjIyN2ZkZTGcv5Bp: 00:17:01.595 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:02:NTVjZjM5ZDQ2MWRmOGUzODllYjM2MzVlNDMwOTc1NjA0NzQ0OWFmOWIyZmZlYzlmWliPlg==: --dhchap-ctrl-secret DHHC-1:01:ZWEwMjQzYmM0ODJlZDU5OTBhYjZkOWNmMjIyN2ZkZTGcv5Bp: 00:17:02.165 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.165 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.165 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:17:02.165 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.165 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:17:02.165 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.165 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:02.165 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:02.165 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:02.423 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:17:02.423 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:02.423 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:02.423 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:02.423 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:02.423 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.423 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key3 00:17:02.423 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.423 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.423 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.423 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:02.423 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:02.423 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:02.681 00:17:02.681 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:02.681 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:02.681 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.940 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.940 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.940 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:02.940 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.940 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.940 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:02.940 { 00:17:02.940 "cntlid": 103, 00:17:02.940 "qid": 0, 00:17:02.940 "state": "enabled", 00:17:02.940 "thread": "nvmf_tgt_poll_group_000", 00:17:02.940 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:17:02.940 "listen_address": { 00:17:02.940 "trtype": "TCP", 00:17:02.940 "adrfam": "IPv4", 00:17:02.940 "traddr": "10.0.0.3", 00:17:02.940 "trsvcid": "4420" 00:17:02.940 }, 00:17:02.940 "peer_address": { 00:17:02.940 "trtype": "TCP", 00:17:02.940 "adrfam": "IPv4", 00:17:02.940 "traddr": "10.0.0.1", 00:17:02.940 "trsvcid": "41862" 00:17:02.940 }, 00:17:02.940 "auth": { 00:17:02.940 "state": "completed", 00:17:02.940 "digest": "sha512", 00:17:02.940 "dhgroup": "null" 00:17:02.940 } 00:17:02.940 } 00:17:02.940 ]' 00:17:03.198 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:03.198 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:03.198 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:03.198 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:03.198 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:03.198 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.198 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.198 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.456 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjYwYjE4ODZiZGUzYmE2ZWI3NGMxZDE3NjI3YWRhYjhkMDVlM2Q4ZDNjYTI1MWVhZjkwNzY0YWM3ZmMyZmQzZImPBGY=: 00:17:03.456 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:03:YjYwYjE4ODZiZGUzYmE2ZWI3NGMxZDE3NjI3YWRhYjhkMDVlM2Q4ZDNjYTI1MWVhZjkwNzY0YWM3ZmMyZmQzZImPBGY=: 00:17:04.023 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.023 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.023 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:17:04.023 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.023 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.023 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:17:04.023 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:04.023 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:04.023 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:04.023 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:04.281 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:17:04.281 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:04.281 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:04.281 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:04.281 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:04.281 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.281 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.281 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.281 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.281 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.281 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.282 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.282 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.540 00:17:04.540 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:04.540 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.540 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:04.797 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.797 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.797 
14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.797 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.797 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.797 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:04.797 { 00:17:04.797 "cntlid": 105, 00:17:04.797 "qid": 0, 00:17:04.797 "state": "enabled", 00:17:04.798 "thread": "nvmf_tgt_poll_group_000", 00:17:04.798 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:17:04.798 "listen_address": { 00:17:04.798 "trtype": "TCP", 00:17:04.798 "adrfam": "IPv4", 00:17:04.798 "traddr": "10.0.0.3", 00:17:04.798 "trsvcid": "4420" 00:17:04.798 }, 00:17:04.798 "peer_address": { 00:17:04.798 "trtype": "TCP", 00:17:04.798 "adrfam": "IPv4", 00:17:04.798 "traddr": "10.0.0.1", 00:17:04.798 "trsvcid": "38212" 00:17:04.798 }, 00:17:04.798 "auth": { 00:17:04.798 "state": "completed", 00:17:04.798 "digest": "sha512", 00:17:04.798 "dhgroup": "ffdhe2048" 00:17:04.798 } 00:17:04.798 } 00:17:04.798 ]' 00:17:04.798 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:04.798 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:04.798 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:05.056 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:05.056 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:05.056 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.056 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.056 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.314 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjBiNzUzNDk5NTJiMDI4YWY5YjAxNGE2M2JhOWY3MGM5NjdhNjYwN2QxZGEyYjVk1hdk9g==: --dhchap-ctrl-secret DHHC-1:03:MWUxNmY4Njk5NmQ3NzkwNWEyYTk0ZjM5Yzg0NDU1MDkzYTMyYTkzZGRhOGYzYThmZDk0NjM3YmI2MzU3ZGU2OKKEk9A=: 00:17:05.314 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:00:MjBiNzUzNDk5NTJiMDI4YWY5YjAxNGE2M2JhOWY3MGM5NjdhNjYwN2QxZGEyYjVk1hdk9g==: --dhchap-ctrl-secret DHHC-1:03:MWUxNmY4Njk5NmQ3NzkwNWEyYTk0ZjM5Yzg0NDU1MDkzYTMyYTkzZGRhOGYzYThmZDk0NjM3YmI2MzU3ZGU2OKKEk9A=: 00:17:05.880 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.880 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:17:05.880 14:21:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.880 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.880 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.880 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:05.880 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:05.880 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:05.880 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:17:05.880 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:05.880 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:05.880 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:05.880 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:05.880 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.880 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.880 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.880 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.880 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.880 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.880 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.880 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.447 00:17:06.447 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:06.447 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:06.447 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.447 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:17:06.447 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.447 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.447 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.447 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.447 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:06.447 { 00:17:06.447 "cntlid": 107, 00:17:06.447 "qid": 0, 00:17:06.447 "state": "enabled", 00:17:06.447 "thread": "nvmf_tgt_poll_group_000", 00:17:06.447 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:17:06.447 "listen_address": { 00:17:06.447 "trtype": "TCP", 00:17:06.447 "adrfam": "IPv4", 00:17:06.447 "traddr": "10.0.0.3", 00:17:06.447 "trsvcid": "4420" 00:17:06.447 }, 00:17:06.447 "peer_address": { 00:17:06.447 "trtype": "TCP", 00:17:06.447 "adrfam": "IPv4", 00:17:06.447 "traddr": "10.0.0.1", 00:17:06.447 "trsvcid": "38246" 00:17:06.447 }, 00:17:06.447 "auth": { 00:17:06.447 "state": "completed", 00:17:06.447 "digest": "sha512", 00:17:06.447 "dhgroup": "ffdhe2048" 00:17:06.447 } 00:17:06.447 } 00:17:06.447 ]' 00:17:06.447 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:06.705 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:06.705 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:06.705 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:06.705 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:06.705 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.705 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.705 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.963 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTkwMmMwOGY3YTdkMGZjNWQ0ODJhMDk2ZGRmZjlmMjZ64U1I: --dhchap-ctrl-secret DHHC-1:02:MGE3OWQyMmNmNGIwMzA2NTgzYjhmMDM4NDRhZWZkMTBmM2YxYzdmZTBlNDRkYjExHkTEaw==: 00:17:06.963 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:01:YTkwMmMwOGY3YTdkMGZjNWQ0ODJhMDk2ZGRmZjlmMjZ64U1I: --dhchap-ctrl-secret DHHC-1:02:MGE3OWQyMmNmNGIwMzA2NTgzYjhmMDM4NDRhZWZkMTBmM2YxYzdmZTBlNDRkYjExHkTEaw==: 00:17:07.529 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.529 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.529 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:17:07.529 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.529 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.529 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.529 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:07.529 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:07.529 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:07.788 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:17:07.788 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:07.788 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:07.788 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:07.788 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:07.788 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.788 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.788 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.788 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.788 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.788 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.788 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.788 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.046 00:17:08.046 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:08.046 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:08.046 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:17:08.305 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.305 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.305 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.305 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.305 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.305 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:08.305 { 00:17:08.305 "cntlid": 109, 00:17:08.305 "qid": 0, 00:17:08.305 "state": "enabled", 00:17:08.305 "thread": "nvmf_tgt_poll_group_000", 00:17:08.305 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:17:08.305 "listen_address": { 00:17:08.305 "trtype": "TCP", 00:17:08.305 "adrfam": "IPv4", 00:17:08.305 "traddr": "10.0.0.3", 00:17:08.305 "trsvcid": "4420" 00:17:08.305 }, 00:17:08.305 "peer_address": { 00:17:08.305 "trtype": "TCP", 00:17:08.305 "adrfam": "IPv4", 00:17:08.305 "traddr": "10.0.0.1", 00:17:08.305 "trsvcid": "38284" 00:17:08.305 }, 00:17:08.305 "auth": { 00:17:08.305 "state": "completed", 00:17:08.305 "digest": "sha512", 00:17:08.305 "dhgroup": "ffdhe2048" 00:17:08.305 } 00:17:08.305 } 00:17:08.305 ]' 00:17:08.305 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:08.305 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:08.305 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:08.305 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:08.305 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:08.305 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.305 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.305 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.564 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTVjZjM5ZDQ2MWRmOGUzODllYjM2MzVlNDMwOTc1NjA0NzQ0OWFmOWIyZmZlYzlmWliPlg==: --dhchap-ctrl-secret DHHC-1:01:ZWEwMjQzYmM0ODJlZDU5OTBhYjZkOWNmMjIyN2ZkZTGcv5Bp: 00:17:08.564 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:02:NTVjZjM5ZDQ2MWRmOGUzODllYjM2MzVlNDMwOTc1NjA0NzQ0OWFmOWIyZmZlYzlmWliPlg==: --dhchap-ctrl-secret DHHC-1:01:ZWEwMjQzYmM0ODJlZDU5OTBhYjZkOWNmMjIyN2ZkZTGcv5Bp: 00:17:09.130 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.130 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.130 14:21:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:17:09.130 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.130 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.130 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.130 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:09.130 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:09.130 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:09.389 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:17:09.389 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:09.389 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:09.389 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:09.389 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:09.389 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.389 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key3 00:17:09.389 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.389 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.389 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.389 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:09.389 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:09.389 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:09.647 00:17:09.647 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:09.647 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.647 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:17:09.905 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.905 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.905 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.905 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.905 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.905 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:09.905 { 00:17:09.905 "cntlid": 111, 00:17:09.905 "qid": 0, 00:17:09.905 "state": "enabled", 00:17:09.905 "thread": "nvmf_tgt_poll_group_000", 00:17:09.905 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:17:09.905 "listen_address": { 00:17:09.905 "trtype": "TCP", 00:17:09.905 "adrfam": "IPv4", 00:17:09.905 "traddr": "10.0.0.3", 00:17:09.905 "trsvcid": "4420" 00:17:09.905 }, 00:17:09.905 "peer_address": { 00:17:09.905 "trtype": "TCP", 00:17:09.905 "adrfam": "IPv4", 00:17:09.905 "traddr": "10.0.0.1", 00:17:09.905 "trsvcid": "38308" 00:17:09.905 }, 00:17:09.905 "auth": { 00:17:09.905 "state": "completed", 00:17:09.905 "digest": "sha512", 00:17:09.905 "dhgroup": "ffdhe2048" 00:17:09.905 } 00:17:09.905 } 00:17:09.905 ]' 00:17:09.905 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:09.905 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:09.906 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:10.164 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:10.164 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:10.164 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.164 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.164 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.422 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjYwYjE4ODZiZGUzYmE2ZWI3NGMxZDE3NjI3YWRhYjhkMDVlM2Q4ZDNjYTI1MWVhZjkwNzY0YWM3ZmMyZmQzZImPBGY=: 00:17:10.422 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:03:YjYwYjE4ODZiZGUzYmE2ZWI3NGMxZDE3NjI3YWRhYjhkMDVlM2Q4ZDNjYTI1MWVhZjkwNzY0YWM3ZmMyZmQzZImPBGY=: 00:17:10.988 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.988 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.988 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:17:10.988 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.988 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.988 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.988 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:10.988 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:10.988 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:10.988 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:11.247 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:17:11.247 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:11.247 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:11.247 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:11.247 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:11.247 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.247 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.247 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.247 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.247 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.247 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.247 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.247 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.508 00:17:11.508 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:11.508 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:11.508 14:21:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.767 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.767 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.767 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.767 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.767 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.767 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:11.767 { 00:17:11.767 "cntlid": 113, 00:17:11.767 "qid": 0, 00:17:11.767 "state": "enabled", 00:17:11.767 "thread": "nvmf_tgt_poll_group_000", 00:17:11.767 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:17:11.767 "listen_address": { 00:17:11.767 "trtype": "TCP", 00:17:11.767 "adrfam": "IPv4", 00:17:11.767 "traddr": "10.0.0.3", 00:17:11.767 "trsvcid": "4420" 00:17:11.767 }, 00:17:11.767 "peer_address": { 00:17:11.767 "trtype": "TCP", 00:17:11.767 "adrfam": "IPv4", 00:17:11.767 "traddr": "10.0.0.1", 00:17:11.767 "trsvcid": "38334" 00:17:11.767 }, 00:17:11.767 "auth": { 00:17:11.767 "state": "completed", 00:17:11.767 "digest": "sha512", 00:17:11.767 "dhgroup": "ffdhe3072" 00:17:11.767 } 00:17:11.767 } 00:17:11.767 ]' 00:17:11.767 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:11.767 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:11.767 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:11.767 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:11.767 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:11.767 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.767 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.767 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.026 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjBiNzUzNDk5NTJiMDI4YWY5YjAxNGE2M2JhOWY3MGM5NjdhNjYwN2QxZGEyYjVk1hdk9g==: --dhchap-ctrl-secret DHHC-1:03:MWUxNmY4Njk5NmQ3NzkwNWEyYTk0ZjM5Yzg0NDU1MDkzYTMyYTkzZGRhOGYzYThmZDk0NjM3YmI2MzU3ZGU2OKKEk9A=: 00:17:12.026 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:00:MjBiNzUzNDk5NTJiMDI4YWY5YjAxNGE2M2JhOWY3MGM5NjdhNjYwN2QxZGEyYjVk1hdk9g==: --dhchap-ctrl-secret DHHC-1:03:MWUxNmY4Njk5NmQ3NzkwNWEyYTk0ZjM5Yzg0NDU1MDkzYTMyYTkzZGRhOGYzYThmZDk0NjM3YmI2MzU3ZGU2OKKEk9A=: 
00:17:12.594 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.594 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:17:12.594 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.594 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.594 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.594 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:12.594 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:12.594 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:12.852 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:17:12.852 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:12.852 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:12.852 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:12.852 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:12.852 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.852 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.852 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.852 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.852 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.852 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.852 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.852 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.111 00:17:13.111 14:21:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:13.111 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.111 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:13.370 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.370 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.371 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.371 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.371 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.371 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:13.371 { 00:17:13.371 "cntlid": 115, 00:17:13.371 "qid": 0, 00:17:13.371 "state": "enabled", 00:17:13.371 "thread": "nvmf_tgt_poll_group_000", 00:17:13.371 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:17:13.371 "listen_address": { 00:17:13.371 "trtype": "TCP", 00:17:13.371 "adrfam": "IPv4", 00:17:13.371 "traddr": "10.0.0.3", 00:17:13.371 "trsvcid": "4420" 00:17:13.371 }, 00:17:13.371 "peer_address": { 00:17:13.371 "trtype": "TCP", 00:17:13.371 "adrfam": "IPv4", 00:17:13.371 "traddr": "10.0.0.1", 00:17:13.371 "trsvcid": "38346" 00:17:13.371 }, 00:17:13.371 "auth": { 00:17:13.371 "state": "completed", 00:17:13.371 "digest": "sha512", 00:17:13.371 "dhgroup": "ffdhe3072" 00:17:13.371 } 00:17:13.371 } 00:17:13.371 ]' 00:17:13.371 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.371 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:13.371 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:13.629 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:13.629 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:13.629 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.629 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.629 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.887 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTkwMmMwOGY3YTdkMGZjNWQ0ODJhMDk2ZGRmZjlmMjZ64U1I: --dhchap-ctrl-secret DHHC-1:02:MGE3OWQyMmNmNGIwMzA2NTgzYjhmMDM4NDRhZWZkMTBmM2YxYzdmZTBlNDRkYjExHkTEaw==: 00:17:13.887 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret 
DHHC-1:01:YTkwMmMwOGY3YTdkMGZjNWQ0ODJhMDk2ZGRmZjlmMjZ64U1I: --dhchap-ctrl-secret DHHC-1:02:MGE3OWQyMmNmNGIwMzA2NTgzYjhmMDM4NDRhZWZkMTBmM2YxYzdmZTBlNDRkYjExHkTEaw==: 00:17:14.453 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.453 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.453 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:17:14.453 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.453 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.453 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.453 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:14.453 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:14.453 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:14.712 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:17:14.712 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:14.712 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:14.712 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:14.712 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:14.712 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.712 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.712 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.712 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.712 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.712 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.712 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.712 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.970 00:17:14.970 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:14.970 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.970 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:15.228 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.228 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.228 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.228 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.228 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.228 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:15.228 { 00:17:15.228 "cntlid": 117, 00:17:15.228 "qid": 0, 00:17:15.228 "state": "enabled", 00:17:15.228 "thread": "nvmf_tgt_poll_group_000", 00:17:15.228 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:17:15.228 "listen_address": { 00:17:15.228 "trtype": "TCP", 00:17:15.228 "adrfam": "IPv4", 00:17:15.228 "traddr": "10.0.0.3", 00:17:15.228 "trsvcid": "4420" 00:17:15.228 }, 00:17:15.228 "peer_address": { 00:17:15.228 "trtype": "TCP", 00:17:15.228 "adrfam": "IPv4", 00:17:15.228 "traddr": "10.0.0.1", 00:17:15.228 "trsvcid": "56542" 00:17:15.228 }, 00:17:15.228 "auth": { 00:17:15.228 "state": "completed", 00:17:15.228 "digest": "sha512", 00:17:15.228 "dhgroup": "ffdhe3072" 00:17:15.228 } 00:17:15.228 } 00:17:15.228 ]' 00:17:15.228 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:15.228 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:15.228 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:15.228 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:15.228 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:15.228 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.228 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.228 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.487 14:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTVjZjM5ZDQ2MWRmOGUzODllYjM2MzVlNDMwOTc1NjA0NzQ0OWFmOWIyZmZlYzlmWliPlg==: --dhchap-ctrl-secret DHHC-1:01:ZWEwMjQzYmM0ODJlZDU5OTBhYjZkOWNmMjIyN2ZkZTGcv5Bp: 00:17:15.487 14:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:02:NTVjZjM5ZDQ2MWRmOGUzODllYjM2MzVlNDMwOTc1NjA0NzQ0OWFmOWIyZmZlYzlmWliPlg==: --dhchap-ctrl-secret DHHC-1:01:ZWEwMjQzYmM0ODJlZDU5OTBhYjZkOWNmMjIyN2ZkZTGcv5Bp: 00:17:16.053 14:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.053 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.053 14:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:17:16.053 14:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.053 14:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.313 14:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.313 14:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:16.313 14:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:16.313 14:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:16.313 14:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:17:16.313 14:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:16.313 14:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:16.313 14:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:16.313 14:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:16.313 14:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.314 14:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key3 00:17:16.314 14:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.314 14:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.314 14:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.314 14:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:16.314 14:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:16.314 14:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:16.610 00:17:16.610 14:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:16.610 14:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.610 14:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:16.869 14:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.869 14:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.869 14:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.869 14:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.869 14:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.869 14:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:16.869 { 00:17:16.869 "cntlid": 119, 00:17:16.869 "qid": 0, 00:17:16.869 "state": "enabled", 00:17:16.869 "thread": "nvmf_tgt_poll_group_000", 00:17:16.869 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:17:16.869 "listen_address": { 00:17:16.869 "trtype": "TCP", 00:17:16.869 "adrfam": "IPv4", 00:17:16.869 "traddr": "10.0.0.3", 00:17:16.869 "trsvcid": "4420" 00:17:16.869 }, 00:17:16.869 "peer_address": { 00:17:16.869 "trtype": "TCP", 00:17:16.869 "adrfam": "IPv4", 00:17:16.869 "traddr": "10.0.0.1", 00:17:16.869 "trsvcid": "56560" 00:17:16.869 }, 00:17:16.869 "auth": { 00:17:16.869 "state": "completed", 00:17:16.869 "digest": "sha512", 00:17:16.869 "dhgroup": "ffdhe3072" 00:17:16.869 } 00:17:16.869 } 00:17:16.869 ]' 00:17:16.869 14:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:17.127 14:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:17.127 14:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:17.127 14:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:17.127 14:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:17.127 14:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.127 14:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.127 14:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.385 14:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjYwYjE4ODZiZGUzYmE2ZWI3NGMxZDE3NjI3YWRhYjhkMDVlM2Q4ZDNjYTI1MWVhZjkwNzY0YWM3ZmMyZmQzZImPBGY=: 00:17:17.385 14:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:03:YjYwYjE4ODZiZGUzYmE2ZWI3NGMxZDE3NjI3YWRhYjhkMDVlM2Q4ZDNjYTI1MWVhZjkwNzY0YWM3ZmMyZmQzZImPBGY=: 00:17:17.951 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.951 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.951 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:17:17.951 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.951 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.951 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.951 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:17.951 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:17.951 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:17.951 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:18.210 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:17:18.210 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:18.210 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:18.210 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:18.210 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:18.210 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.210 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.210 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.210 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.210 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.210 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.210 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.210 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.468 00:17:18.468 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:18.468 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:18.468 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.728 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.728 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.728 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.728 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.728 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.728 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:18.728 { 00:17:18.728 "cntlid": 121, 00:17:18.728 "qid": 0, 00:17:18.728 "state": "enabled", 00:17:18.728 "thread": "nvmf_tgt_poll_group_000", 00:17:18.728 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:17:18.728 "listen_address": { 00:17:18.728 "trtype": "TCP", 00:17:18.728 "adrfam": "IPv4", 00:17:18.728 "traddr": "10.0.0.3", 00:17:18.728 "trsvcid": "4420" 00:17:18.728 }, 00:17:18.728 "peer_address": { 00:17:18.728 "trtype": "TCP", 00:17:18.728 "adrfam": "IPv4", 00:17:18.728 "traddr": "10.0.0.1", 00:17:18.728 "trsvcid": "56594" 00:17:18.728 }, 00:17:18.728 "auth": { 00:17:18.728 "state": "completed", 00:17:18.728 "digest": "sha512", 00:17:18.728 "dhgroup": "ffdhe4096" 00:17:18.728 } 00:17:18.728 } 00:17:18.728 ]' 00:17:18.728 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:18.728 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:18.728 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:18.728 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:18.728 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:18.728 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.728 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.728 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.987 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjBiNzUzNDk5NTJiMDI4YWY5YjAxNGE2M2JhOWY3MGM5NjdhNjYwN2QxZGEyYjVk1hdk9g==: --dhchap-ctrl-secret 
DHHC-1:03:MWUxNmY4Njk5NmQ3NzkwNWEyYTk0ZjM5Yzg0NDU1MDkzYTMyYTkzZGRhOGYzYThmZDk0NjM3YmI2MzU3ZGU2OKKEk9A=: 00:17:18.988 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:00:MjBiNzUzNDk5NTJiMDI4YWY5YjAxNGE2M2JhOWY3MGM5NjdhNjYwN2QxZGEyYjVk1hdk9g==: --dhchap-ctrl-secret DHHC-1:03:MWUxNmY4Njk5NmQ3NzkwNWEyYTk0ZjM5Yzg0NDU1MDkzYTMyYTkzZGRhOGYzYThmZDk0NjM3YmI2MzU3ZGU2OKKEk9A=: 00:17:19.555 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.555 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.555 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:17:19.555 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.555 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.555 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.555 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:19.555 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:19.555 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:20.124 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:17:20.124 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:20.124 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:20.124 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:20.124 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:20.124 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.124 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.124 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.124 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.124 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.124 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.124 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.124 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.387 00:17:20.387 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:20.387 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:20.387 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.647 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.647 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.647 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.647 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.647 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.647 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:20.647 { 00:17:20.647 "cntlid": 123, 00:17:20.647 "qid": 0, 00:17:20.647 "state": "enabled", 00:17:20.647 "thread": "nvmf_tgt_poll_group_000", 00:17:20.647 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:17:20.647 "listen_address": { 00:17:20.647 "trtype": "TCP", 00:17:20.647 "adrfam": "IPv4", 00:17:20.647 "traddr": "10.0.0.3", 00:17:20.647 "trsvcid": "4420" 00:17:20.647 }, 00:17:20.647 "peer_address": { 00:17:20.647 "trtype": "TCP", 00:17:20.647 "adrfam": "IPv4", 00:17:20.647 "traddr": "10.0.0.1", 00:17:20.647 "trsvcid": "56620" 00:17:20.647 }, 00:17:20.647 "auth": { 00:17:20.647 "state": "completed", 00:17:20.647 "digest": "sha512", 00:17:20.647 "dhgroup": "ffdhe4096" 00:17:20.647 } 00:17:20.647 } 00:17:20.647 ]' 00:17:20.647 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:20.647 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:20.647 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:20.647 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:20.647 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:20.647 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.647 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.647 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.905 14:21:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTkwMmMwOGY3YTdkMGZjNWQ0ODJhMDk2ZGRmZjlmMjZ64U1I: --dhchap-ctrl-secret DHHC-1:02:MGE3OWQyMmNmNGIwMzA2NTgzYjhmMDM4NDRhZWZkMTBmM2YxYzdmZTBlNDRkYjExHkTEaw==: 00:17:20.905 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:01:YTkwMmMwOGY3YTdkMGZjNWQ0ODJhMDk2ZGRmZjlmMjZ64U1I: --dhchap-ctrl-secret DHHC-1:02:MGE3OWQyMmNmNGIwMzA2NTgzYjhmMDM4NDRhZWZkMTBmM2YxYzdmZTBlNDRkYjExHkTEaw==: 00:17:21.474 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.474 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.474 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:17:21.474 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.474 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.474 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.474 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:21.474 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:21.474 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:21.733 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:17:21.733 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:21.733 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:21.733 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:21.733 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:21.733 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.733 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.733 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.733 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.733 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.733 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.733 14:21:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.733 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.991 00:17:22.250 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:22.250 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:22.250 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.250 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.250 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.250 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.250 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.510 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.510 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:22.510 { 00:17:22.510 "cntlid": 125, 00:17:22.510 "qid": 0, 00:17:22.510 "state": "enabled", 00:17:22.510 "thread": "nvmf_tgt_poll_group_000", 00:17:22.510 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:17:22.510 "listen_address": { 00:17:22.510 "trtype": "TCP", 00:17:22.510 "adrfam": "IPv4", 00:17:22.510 "traddr": "10.0.0.3", 00:17:22.510 "trsvcid": "4420" 00:17:22.510 }, 00:17:22.510 "peer_address": { 00:17:22.510 "trtype": "TCP", 00:17:22.510 "adrfam": "IPv4", 00:17:22.510 "traddr": "10.0.0.1", 00:17:22.510 "trsvcid": "56658" 00:17:22.510 }, 00:17:22.510 "auth": { 00:17:22.510 "state": "completed", 00:17:22.510 "digest": "sha512", 00:17:22.510 "dhgroup": "ffdhe4096" 00:17:22.510 } 00:17:22.510 } 00:17:22.510 ]' 00:17:22.510 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:22.510 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:22.510 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:22.510 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:22.510 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:22.510 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.510 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.510 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.770 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTVjZjM5ZDQ2MWRmOGUzODllYjM2MzVlNDMwOTc1NjA0NzQ0OWFmOWIyZmZlYzlmWliPlg==: --dhchap-ctrl-secret DHHC-1:01:ZWEwMjQzYmM0ODJlZDU5OTBhYjZkOWNmMjIyN2ZkZTGcv5Bp: 00:17:22.770 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:02:NTVjZjM5ZDQ2MWRmOGUzODllYjM2MzVlNDMwOTc1NjA0NzQ0OWFmOWIyZmZlYzlmWliPlg==: --dhchap-ctrl-secret DHHC-1:01:ZWEwMjQzYmM0ODJlZDU5OTBhYjZkOWNmMjIyN2ZkZTGcv5Bp: 00:17:23.337 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.337 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.337 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:17:23.337 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.337 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.337 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.337 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:23.337 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:23.337 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:23.596 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:17:23.596 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:23.596 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:23.596 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:23.596 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:23.596 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.596 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key3 00:17:23.596 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.596 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.596 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.596 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:17:23.597 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:23.597 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:23.858 00:17:23.858 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:23.858 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:23.858 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.118 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.118 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.118 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.118 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.118 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.118 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:24.118 { 00:17:24.118 "cntlid": 127, 00:17:24.118 "qid": 0, 00:17:24.118 "state": "enabled", 00:17:24.118 "thread": "nvmf_tgt_poll_group_000", 00:17:24.118 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:17:24.118 "listen_address": { 00:17:24.118 "trtype": "TCP", 00:17:24.118 "adrfam": "IPv4", 00:17:24.118 "traddr": "10.0.0.3", 00:17:24.118 "trsvcid": "4420" 00:17:24.118 }, 00:17:24.118 "peer_address": { 00:17:24.118 "trtype": "TCP", 00:17:24.118 "adrfam": "IPv4", 00:17:24.118 "traddr": "10.0.0.1", 00:17:24.118 "trsvcid": "58594" 00:17:24.118 }, 00:17:24.118 "auth": { 00:17:24.118 "state": "completed", 00:17:24.118 "digest": "sha512", 00:17:24.118 "dhgroup": "ffdhe4096" 00:17:24.118 } 00:17:24.118 } 00:17:24.118 ]' 00:17:24.118 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:24.375 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:24.375 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:24.375 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:24.375 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:24.375 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.375 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.375 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.633 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjYwYjE4ODZiZGUzYmE2ZWI3NGMxZDE3NjI3YWRhYjhkMDVlM2Q4ZDNjYTI1MWVhZjkwNzY0YWM3ZmMyZmQzZImPBGY=: 00:17:24.633 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:03:YjYwYjE4ODZiZGUzYmE2ZWI3NGMxZDE3NjI3YWRhYjhkMDVlM2Q4ZDNjYTI1MWVhZjkwNzY0YWM3ZmMyZmQzZImPBGY=: 00:17:25.201 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.201 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.201 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:17:25.201 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.201 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.201 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.201 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:25.201 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:25.201 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:25.201 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:25.460 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:17:25.460 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:25.460 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:25.460 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:25.460 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:25.460 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.460 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.460 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.460 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.460 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.460 14:21:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.460 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.460 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.719 00:17:25.719 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:25.719 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.719 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:25.978 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.978 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.978 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.978 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.978 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.978 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:25.978 { 00:17:25.978 "cntlid": 129, 00:17:25.978 "qid": 0, 00:17:25.978 "state": "enabled", 00:17:25.978 "thread": "nvmf_tgt_poll_group_000", 00:17:25.978 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:17:25.978 "listen_address": { 00:17:25.978 "trtype": "TCP", 00:17:25.978 "adrfam": "IPv4", 00:17:25.978 "traddr": "10.0.0.3", 00:17:25.978 "trsvcid": "4420" 00:17:25.978 }, 00:17:25.978 "peer_address": { 00:17:25.978 "trtype": "TCP", 00:17:25.978 "adrfam": "IPv4", 00:17:25.978 "traddr": "10.0.0.1", 00:17:25.978 "trsvcid": "58626" 00:17:25.978 }, 00:17:25.978 "auth": { 00:17:25.978 "state": "completed", 00:17:25.978 "digest": "sha512", 00:17:25.978 "dhgroup": "ffdhe6144" 00:17:25.978 } 00:17:25.978 } 00:17:25.978 ]' 00:17:25.978 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:25.978 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:26.236 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:26.236 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:26.236 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:26.236 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.236 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.236 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.494 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjBiNzUzNDk5NTJiMDI4YWY5YjAxNGE2M2JhOWY3MGM5NjdhNjYwN2QxZGEyYjVk1hdk9g==: --dhchap-ctrl-secret DHHC-1:03:MWUxNmY4Njk5NmQ3NzkwNWEyYTk0ZjM5Yzg0NDU1MDkzYTMyYTkzZGRhOGYzYThmZDk0NjM3YmI2MzU3ZGU2OKKEk9A=: 00:17:26.495 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:00:MjBiNzUzNDk5NTJiMDI4YWY5YjAxNGE2M2JhOWY3MGM5NjdhNjYwN2QxZGEyYjVk1hdk9g==: --dhchap-ctrl-secret DHHC-1:03:MWUxNmY4Njk5NmQ3NzkwNWEyYTk0ZjM5Yzg0NDU1MDkzYTMyYTkzZGRhOGYzYThmZDk0NjM3YmI2MzU3ZGU2OKKEk9A=: 00:17:27.063 14:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.063 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.063 14:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:17:27.063 14:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.063 14:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.063 14:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.063 14:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:27.063 14:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:27.063 14:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:27.323 14:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:17:27.323 14:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:27.323 14:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:27.323 14:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:27.323 14:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:27.323 14:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.323 14:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.323 14:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.323 14:21:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.323 14:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.323 14:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.323 14:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.323 14:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.891 00:17:27.891 14:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:27.891 14:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.891 14:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:27.891 14:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.891 14:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.891 14:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.891 14:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.891 14:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.891 14:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:27.891 { 00:17:27.891 "cntlid": 131, 00:17:27.891 "qid": 0, 00:17:27.891 "state": "enabled", 00:17:27.891 "thread": "nvmf_tgt_poll_group_000", 00:17:27.891 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:17:27.891 "listen_address": { 00:17:27.891 "trtype": "TCP", 00:17:27.891 "adrfam": "IPv4", 00:17:27.891 "traddr": "10.0.0.3", 00:17:27.891 "trsvcid": "4420" 00:17:27.891 }, 00:17:27.891 "peer_address": { 00:17:27.891 "trtype": "TCP", 00:17:27.891 "adrfam": "IPv4", 00:17:27.891 "traddr": "10.0.0.1", 00:17:27.891 "trsvcid": "58654" 00:17:27.891 }, 00:17:27.891 "auth": { 00:17:27.891 "state": "completed", 00:17:27.891 "digest": "sha512", 00:17:27.891 "dhgroup": "ffdhe6144" 00:17:27.891 } 00:17:27.891 } 00:17:27.891 ]' 00:17:27.891 14:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:28.150 14:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:28.150 14:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:28.150 14:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:28.150 14:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:17:28.150 14:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.150 14:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.150 14:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.410 14:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTkwMmMwOGY3YTdkMGZjNWQ0ODJhMDk2ZGRmZjlmMjZ64U1I: --dhchap-ctrl-secret DHHC-1:02:MGE3OWQyMmNmNGIwMzA2NTgzYjhmMDM4NDRhZWZkMTBmM2YxYzdmZTBlNDRkYjExHkTEaw==: 00:17:28.410 14:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:01:YTkwMmMwOGY3YTdkMGZjNWQ0ODJhMDk2ZGRmZjlmMjZ64U1I: --dhchap-ctrl-secret DHHC-1:02:MGE3OWQyMmNmNGIwMzA2NTgzYjhmMDM4NDRhZWZkMTBmM2YxYzdmZTBlNDRkYjExHkTEaw==: 00:17:28.978 14:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.978 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.978 14:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:17:28.978 14:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.978 14:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.978 14:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.978 14:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:28.978 14:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:28.978 14:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:29.236 14:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:17:29.236 14:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:29.236 14:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:29.236 14:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:29.236 14:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:29.236 14:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.236 14:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.236 14:21:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.236 14:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.236 14:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.236 14:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.237 14:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.237 14:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.804 00:17:29.804 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:29.804 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:29.804 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.804 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.804 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.804 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.804 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.064 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.064 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:30.064 { 00:17:30.064 "cntlid": 133, 00:17:30.064 "qid": 0, 00:17:30.064 "state": "enabled", 00:17:30.064 "thread": "nvmf_tgt_poll_group_000", 00:17:30.064 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:17:30.064 "listen_address": { 00:17:30.064 "trtype": "TCP", 00:17:30.064 "adrfam": "IPv4", 00:17:30.064 "traddr": "10.0.0.3", 00:17:30.064 "trsvcid": "4420" 00:17:30.064 }, 00:17:30.064 "peer_address": { 00:17:30.064 "trtype": "TCP", 00:17:30.064 "adrfam": "IPv4", 00:17:30.064 "traddr": "10.0.0.1", 00:17:30.064 "trsvcid": "58676" 00:17:30.064 }, 00:17:30.064 "auth": { 00:17:30.064 "state": "completed", 00:17:30.064 "digest": "sha512", 00:17:30.064 "dhgroup": "ffdhe6144" 00:17:30.064 } 00:17:30.064 } 00:17:30.064 ]' 00:17:30.064 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:30.064 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:30.064 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:30.064 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:17:30.064 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:30.064 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.064 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.064 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.348 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTVjZjM5ZDQ2MWRmOGUzODllYjM2MzVlNDMwOTc1NjA0NzQ0OWFmOWIyZmZlYzlmWliPlg==: --dhchap-ctrl-secret DHHC-1:01:ZWEwMjQzYmM0ODJlZDU5OTBhYjZkOWNmMjIyN2ZkZTGcv5Bp: 00:17:30.348 14:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:02:NTVjZjM5ZDQ2MWRmOGUzODllYjM2MzVlNDMwOTc1NjA0NzQ0OWFmOWIyZmZlYzlmWliPlg==: --dhchap-ctrl-secret DHHC-1:01:ZWEwMjQzYmM0ODJlZDU5OTBhYjZkOWNmMjIyN2ZkZTGcv5Bp: 00:17:30.916 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.916 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.916 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:17:30.916 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.916 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.916 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.916 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:30.916 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:30.916 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:31.175 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:17:31.175 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:31.175 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:31.175 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:31.175 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:31.175 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.175 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key3 00:17:31.175 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.175 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.175 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.175 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:31.175 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:31.175 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:31.434 00:17:31.434 14:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:31.434 14:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:31.434 14:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.693 14:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.693 14:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.693 14:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.693 14:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.693 14:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.693 14:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:31.693 { 00:17:31.693 "cntlid": 135, 00:17:31.693 "qid": 0, 00:17:31.693 "state": "enabled", 00:17:31.693 "thread": "nvmf_tgt_poll_group_000", 00:17:31.693 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:17:31.693 "listen_address": { 00:17:31.693 "trtype": "TCP", 00:17:31.693 "adrfam": "IPv4", 00:17:31.693 "traddr": "10.0.0.3", 00:17:31.693 "trsvcid": "4420" 00:17:31.693 }, 00:17:31.693 "peer_address": { 00:17:31.693 "trtype": "TCP", 00:17:31.693 "adrfam": "IPv4", 00:17:31.693 "traddr": "10.0.0.1", 00:17:31.693 "trsvcid": "58686" 00:17:31.693 }, 00:17:31.693 "auth": { 00:17:31.693 "state": "completed", 00:17:31.693 "digest": "sha512", 00:17:31.693 "dhgroup": "ffdhe6144" 00:17:31.693 } 00:17:31.693 } 00:17:31.693 ]' 00:17:31.693 14:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:31.693 14:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:31.693 14:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:31.952 14:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:31.952 14:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:31.952 14:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.952 14:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.952 14:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.212 14:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjYwYjE4ODZiZGUzYmE2ZWI3NGMxZDE3NjI3YWRhYjhkMDVlM2Q4ZDNjYTI1MWVhZjkwNzY0YWM3ZmMyZmQzZImPBGY=: 00:17:32.212 14:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:03:YjYwYjE4ODZiZGUzYmE2ZWI3NGMxZDE3NjI3YWRhYjhkMDVlM2Q4ZDNjYTI1MWVhZjkwNzY0YWM3ZmMyZmQzZImPBGY=: 00:17:32.780 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.780 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.780 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:17:32.780 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.780 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.780 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.780 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:32.780 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:32.780 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:32.780 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:33.039 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:17:33.039 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:33.040 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:33.040 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:33.040 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:33.040 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.040 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.040 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.040 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.040 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.040 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.040 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.040 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.646 00:17:33.647 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:33.647 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:33.647 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.905 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.905 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.905 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.905 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.905 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.905 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:33.905 { 00:17:33.905 "cntlid": 137, 00:17:33.905 "qid": 0, 00:17:33.905 "state": "enabled", 00:17:33.905 "thread": "nvmf_tgt_poll_group_000", 00:17:33.905 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:17:33.905 "listen_address": { 00:17:33.905 "trtype": "TCP", 00:17:33.905 "adrfam": "IPv4", 00:17:33.905 "traddr": "10.0.0.3", 00:17:33.905 "trsvcid": "4420" 00:17:33.905 }, 00:17:33.905 "peer_address": { 00:17:33.905 "trtype": "TCP", 00:17:33.905 "adrfam": "IPv4", 00:17:33.905 "traddr": "10.0.0.1", 00:17:33.905 "trsvcid": "37982" 00:17:33.905 }, 00:17:33.905 "auth": { 00:17:33.905 "state": "completed", 00:17:33.905 "digest": "sha512", 00:17:33.905 "dhgroup": "ffdhe8192" 00:17:33.905 } 00:17:33.905 } 00:17:33.905 ]' 00:17:33.905 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:33.905 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:33.905 14:22:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:33.905 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:33.905 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:33.905 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.905 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.905 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.164 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjBiNzUzNDk5NTJiMDI4YWY5YjAxNGE2M2JhOWY3MGM5NjdhNjYwN2QxZGEyYjVk1hdk9g==: --dhchap-ctrl-secret DHHC-1:03:MWUxNmY4Njk5NmQ3NzkwNWEyYTk0ZjM5Yzg0NDU1MDkzYTMyYTkzZGRhOGYzYThmZDk0NjM3YmI2MzU3ZGU2OKKEk9A=: 00:17:34.164 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:00:MjBiNzUzNDk5NTJiMDI4YWY5YjAxNGE2M2JhOWY3MGM5NjdhNjYwN2QxZGEyYjVk1hdk9g==: --dhchap-ctrl-secret DHHC-1:03:MWUxNmY4Njk5NmQ3NzkwNWEyYTk0ZjM5Yzg0NDU1MDkzYTMyYTkzZGRhOGYzYThmZDk0NjM3YmI2MzU3ZGU2OKKEk9A=: 00:17:34.732 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.732 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.732 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:17:34.732 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.732 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.732 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.732 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:34.732 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:34.732 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:34.991 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:17:34.991 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:34.991 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:34.991 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:34.991 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:34.991 14:22:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.991 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.991 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.991 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.991 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.991 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.991 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.991 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.559 00:17:35.559 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:35.559 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:35.559 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.818 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.818 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.818 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.818 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.818 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.818 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:35.818 { 00:17:35.818 "cntlid": 139, 00:17:35.818 "qid": 0, 00:17:35.818 "state": "enabled", 00:17:35.818 "thread": "nvmf_tgt_poll_group_000", 00:17:35.818 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:17:35.818 "listen_address": { 00:17:35.818 "trtype": "TCP", 00:17:35.818 "adrfam": "IPv4", 00:17:35.818 "traddr": "10.0.0.3", 00:17:35.818 "trsvcid": "4420" 00:17:35.818 }, 00:17:35.818 "peer_address": { 00:17:35.818 "trtype": "TCP", 00:17:35.818 "adrfam": "IPv4", 00:17:35.818 "traddr": "10.0.0.1", 00:17:35.818 "trsvcid": "38002" 00:17:35.818 }, 00:17:35.818 "auth": { 00:17:35.818 "state": "completed", 00:17:35.818 "digest": "sha512", 00:17:35.818 "dhgroup": "ffdhe8192" 00:17:35.818 } 00:17:35.818 } 00:17:35.818 ]' 00:17:35.818 14:22:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:35.818 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:35.818 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:35.818 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:35.818 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:35.818 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.818 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.818 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.077 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTkwMmMwOGY3YTdkMGZjNWQ0ODJhMDk2ZGRmZjlmMjZ64U1I: --dhchap-ctrl-secret DHHC-1:02:MGE3OWQyMmNmNGIwMzA2NTgzYjhmMDM4NDRhZWZkMTBmM2YxYzdmZTBlNDRkYjExHkTEaw==: 00:17:36.077 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:01:YTkwMmMwOGY3YTdkMGZjNWQ0ODJhMDk2ZGRmZjlmMjZ64U1I: --dhchap-ctrl-secret DHHC-1:02:MGE3OWQyMmNmNGIwMzA2NTgzYjhmMDM4NDRhZWZkMTBmM2YxYzdmZTBlNDRkYjExHkTEaw==: 00:17:36.648 14:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.648 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.648 14:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:17:36.648 14:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.648 14:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.648 14:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.648 14:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:36.648 14:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:36.648 14:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:36.907 14:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:17:36.907 14:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:36.907 14:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:36.907 14:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:17:36.907 14:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:36.907 14:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.907 14:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.907 14:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.907 14:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.907 14:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.907 14:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.907 14:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.907 14:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.482 00:17:37.743 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:37.743 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.743 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:38.002 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.002 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.002 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.002 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.002 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.002 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:38.002 { 00:17:38.002 "cntlid": 141, 00:17:38.002 "qid": 0, 00:17:38.002 "state": "enabled", 00:17:38.002 "thread": "nvmf_tgt_poll_group_000", 00:17:38.002 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:17:38.002 "listen_address": { 00:17:38.002 "trtype": "TCP", 00:17:38.002 "adrfam": "IPv4", 00:17:38.002 "traddr": "10.0.0.3", 00:17:38.002 "trsvcid": "4420" 00:17:38.002 }, 00:17:38.002 "peer_address": { 00:17:38.002 "trtype": "TCP", 00:17:38.002 "adrfam": "IPv4", 00:17:38.002 "traddr": "10.0.0.1", 00:17:38.002 "trsvcid": "38022" 00:17:38.002 }, 00:17:38.002 "auth": { 00:17:38.002 "state": "completed", 00:17:38.002 "digest": 
"sha512", 00:17:38.002 "dhgroup": "ffdhe8192" 00:17:38.002 } 00:17:38.002 } 00:17:38.002 ]' 00:17:38.002 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:38.002 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:38.002 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:38.002 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:38.002 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:38.002 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.002 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.002 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.262 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTVjZjM5ZDQ2MWRmOGUzODllYjM2MzVlNDMwOTc1NjA0NzQ0OWFmOWIyZmZlYzlmWliPlg==: --dhchap-ctrl-secret DHHC-1:01:ZWEwMjQzYmM0ODJlZDU5OTBhYjZkOWNmMjIyN2ZkZTGcv5Bp: 00:17:38.262 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:02:NTVjZjM5ZDQ2MWRmOGUzODllYjM2MzVlNDMwOTc1NjA0NzQ0OWFmOWIyZmZlYzlmWliPlg==: --dhchap-ctrl-secret DHHC-1:01:ZWEwMjQzYmM0ODJlZDU5OTBhYjZkOWNmMjIyN2ZkZTGcv5Bp: 00:17:38.830 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.830 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.830 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:17:38.830 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.830 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.830 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.830 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:38.830 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:38.830 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:39.089 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:17:39.089 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:39.089 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:17:39.089 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:39.089 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:39.089 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.089 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key3 00:17:39.089 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.089 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.089 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.089 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:39.089 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:39.089 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:39.657 00:17:39.657 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:39.657 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.657 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:39.916 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.916 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.916 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.916 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.916 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.916 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:39.916 { 00:17:39.916 "cntlid": 143, 00:17:39.916 "qid": 0, 00:17:39.916 "state": "enabled", 00:17:39.916 "thread": "nvmf_tgt_poll_group_000", 00:17:39.916 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:17:39.916 "listen_address": { 00:17:39.916 "trtype": "TCP", 00:17:39.916 "adrfam": "IPv4", 00:17:39.916 "traddr": "10.0.0.3", 00:17:39.916 "trsvcid": "4420" 00:17:39.916 }, 00:17:39.916 "peer_address": { 00:17:39.916 "trtype": "TCP", 00:17:39.916 "adrfam": "IPv4", 00:17:39.916 "traddr": "10.0.0.1", 00:17:39.916 "trsvcid": "38056" 00:17:39.916 }, 00:17:39.917 "auth": { 00:17:39.917 "state": "completed", 00:17:39.917 
"digest": "sha512", 00:17:39.917 "dhgroup": "ffdhe8192" 00:17:39.917 } 00:17:39.917 } 00:17:39.917 ]' 00:17:39.917 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:39.917 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:39.917 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:39.917 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:39.917 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:39.917 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.917 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.917 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.175 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjYwYjE4ODZiZGUzYmE2ZWI3NGMxZDE3NjI3YWRhYjhkMDVlM2Q4ZDNjYTI1MWVhZjkwNzY0YWM3ZmMyZmQzZImPBGY=: 00:17:40.175 14:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:03:YjYwYjE4ODZiZGUzYmE2ZWI3NGMxZDE3NjI3YWRhYjhkMDVlM2Q4ZDNjYTI1MWVhZjkwNzY0YWM3ZmMyZmQzZImPBGY=: 00:17:40.743 14:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.743 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.743 14:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:17:40.743 14:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.743 14:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.743 14:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.743 14:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:40.743 14:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:17:40.743 14:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:40.743 14:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:40.743 14:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:40.743 14:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:41.002 14:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:17:41.002 14:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:41.002 14:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:41.002 14:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:41.002 14:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:41.002 14:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.002 14:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.002 14:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.002 14:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.002 14:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.002 14:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.261 14:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.261 14:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.521 00:17:41.780 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:41.780 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.780 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:41.780 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.780 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.780 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.780 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.039 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.039 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:42.039 { 00:17:42.039 "cntlid": 145, 00:17:42.039 "qid": 0, 00:17:42.039 "state": "enabled", 00:17:42.039 "thread": "nvmf_tgt_poll_group_000", 00:17:42.039 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:17:42.039 "listen_address": { 00:17:42.039 "trtype": "TCP", 00:17:42.039 "adrfam": "IPv4", 00:17:42.039 "traddr": "10.0.0.3", 00:17:42.039 "trsvcid": "4420" 00:17:42.039 }, 00:17:42.039 "peer_address": { 00:17:42.039 "trtype": "TCP", 00:17:42.039 "adrfam": "IPv4", 00:17:42.039 "traddr": "10.0.0.1", 00:17:42.039 "trsvcid": "38080" 00:17:42.039 }, 00:17:42.039 "auth": { 00:17:42.039 "state": "completed", 00:17:42.039 "digest": "sha512", 00:17:42.039 "dhgroup": "ffdhe8192" 00:17:42.039 } 00:17:42.039 } 00:17:42.039 ]' 00:17:42.039 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:42.039 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:42.039 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:42.039 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:42.039 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:42.039 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.039 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.039 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.297 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjBiNzUzNDk5NTJiMDI4YWY5YjAxNGE2M2JhOWY3MGM5NjdhNjYwN2QxZGEyYjVk1hdk9g==: --dhchap-ctrl-secret DHHC-1:03:MWUxNmY4Njk5NmQ3NzkwNWEyYTk0ZjM5Yzg0NDU1MDkzYTMyYTkzZGRhOGYzYThmZDk0NjM3YmI2MzU3ZGU2OKKEk9A=: 00:17:42.298 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:00:MjBiNzUzNDk5NTJiMDI4YWY5YjAxNGE2M2JhOWY3MGM5NjdhNjYwN2QxZGEyYjVk1hdk9g==: --dhchap-ctrl-secret DHHC-1:03:MWUxNmY4Njk5NmQ3NzkwNWEyYTk0ZjM5Yzg0NDU1MDkzYTMyYTkzZGRhOGYzYThmZDk0NjM3YmI2MzU3ZGU2OKKEk9A=: 00:17:42.865 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.865 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.865 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:17:42.865 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.865 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.865 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.865 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key1 00:17:42.865 14:22:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.865 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.865 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.865 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:17:42.865 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:42.865 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:17:42.865 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:42.865 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:42.865 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:42.865 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:42.865 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:17:42.865 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:42.865 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:43.433 request: 00:17:43.433 { 00:17:43.433 "name": "nvme0", 00:17:43.433 "trtype": "tcp", 00:17:43.433 "traddr": "10.0.0.3", 00:17:43.433 "adrfam": "ipv4", 00:17:43.433 "trsvcid": "4420", 00:17:43.433 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:43.433 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:17:43.433 "prchk_reftag": false, 00:17:43.433 "prchk_guard": false, 00:17:43.433 "hdgst": false, 00:17:43.433 "ddgst": false, 00:17:43.433 "dhchap_key": "key2", 00:17:43.433 "allow_unrecognized_csi": false, 00:17:43.433 "method": "bdev_nvme_attach_controller", 00:17:43.433 "req_id": 1 00:17:43.433 } 00:17:43.433 Got JSON-RPC error response 00:17:43.433 response: 00:17:43.433 { 00:17:43.433 "code": -5, 00:17:43.433 "message": "Input/output error" 00:17:43.433 } 00:17:43.433 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:43.433 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:43.433 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:43.433 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:43.433 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:17:43.433 
14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.433 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.433 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.433 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.433 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.433 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.433 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.433 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:43.433 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:43.433 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:43.433 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:43.433 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:43.433 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:43.433 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:43.433 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:43.433 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:43.433 14:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:44.002 request: 00:17:44.002 { 00:17:44.002 "name": "nvme0", 00:17:44.002 "trtype": "tcp", 00:17:44.002 "traddr": "10.0.0.3", 00:17:44.002 "adrfam": "ipv4", 00:17:44.002 "trsvcid": "4420", 00:17:44.002 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:44.002 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:17:44.002 "prchk_reftag": false, 00:17:44.002 "prchk_guard": false, 00:17:44.002 "hdgst": false, 00:17:44.002 "ddgst": false, 00:17:44.002 "dhchap_key": "key1", 00:17:44.002 "dhchap_ctrlr_key": "ckey2", 00:17:44.002 "allow_unrecognized_csi": false, 00:17:44.002 "method": "bdev_nvme_attach_controller", 00:17:44.002 "req_id": 1 00:17:44.002 } 00:17:44.002 Got JSON-RPC error response 00:17:44.002 response: 00:17:44.002 { 
00:17:44.002 "code": -5, 00:17:44.002 "message": "Input/output error" 00:17:44.002 } 00:17:44.002 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:44.002 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:44.002 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:44.002 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:44.002 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:17:44.002 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.002 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.002 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.002 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key1 00:17:44.002 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.003 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.003 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.003 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.003 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:44.003 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.003 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:44.003 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:44.003 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:44.003 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:44.003 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.003 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.003 14:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.571 
request: 00:17:44.571 { 00:17:44.571 "name": "nvme0", 00:17:44.571 "trtype": "tcp", 00:17:44.571 "traddr": "10.0.0.3", 00:17:44.571 "adrfam": "ipv4", 00:17:44.571 "trsvcid": "4420", 00:17:44.571 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:44.571 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:17:44.571 "prchk_reftag": false, 00:17:44.571 "prchk_guard": false, 00:17:44.571 "hdgst": false, 00:17:44.571 "ddgst": false, 00:17:44.571 "dhchap_key": "key1", 00:17:44.571 "dhchap_ctrlr_key": "ckey1", 00:17:44.571 "allow_unrecognized_csi": false, 00:17:44.571 "method": "bdev_nvme_attach_controller", 00:17:44.571 "req_id": 1 00:17:44.571 } 00:17:44.571 Got JSON-RPC error response 00:17:44.571 response: 00:17:44.571 { 00:17:44.571 "code": -5, 00:17:44.571 "message": "Input/output error" 00:17:44.571 } 00:17:44.571 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:44.571 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:44.571 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:44.571 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:44.571 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:17:44.571 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.571 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.571 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.571 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 70706 00:17:44.571 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 70706 ']' 00:17:44.571 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 70706 00:17:44.571 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:17:44.571 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:44.571 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70706 00:17:44.571 killing process with pid 70706 00:17:44.571 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:44.571 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:44.571 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70706' 00:17:44.571 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 70706 00:17:44.571 14:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 70706 00:17:45.951 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:45.951 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:45.951 14:22:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:45.951 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.951 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:45.951 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=73567 00:17:45.951 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 73567 00:17:45.951 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 73567 ']' 00:17:45.951 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:45.951 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:45.951 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:45.951 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:45.951 14:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.887 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:46.887 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:17:46.887 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:46.887 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:46.887 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.887 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:46.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:46.887 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:46.887 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 73567 00:17:46.887 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 73567 ']' 00:17:46.887 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.887 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:46.887 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:46.887 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:46.887 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.146 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:47.146 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:17:47.146 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:17:47.146 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.146 14:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.751 null0 00:17:47.751 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.751 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:47.751 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.kap 00:17:47.751 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.751 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.751 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.751 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.1C8 ]] 00:17:47.751 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1C8 00:17:47.751 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.751 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.751 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.751 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:47.751 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.ifU 00:17:47.751 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.751 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.751 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.751 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.ZZg ]] 00:17:47.751 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ZZg 00:17:47.751 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.751 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.751 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.751 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:47.751 14:22:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.56O 00:17:47.751 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.751 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.751 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.751 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.b9F ]] 00:17:47.751 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.b9F 00:17:47.751 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.751 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.751 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.751 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:47.751 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.PEB 00:17:47.751 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.751 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.751 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.751 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:17:47.751 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:17:47.751 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:47.751 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:47.751 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:47.751 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:47.751 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.751 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key3 00:17:47.751 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.751 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.752 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.752 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:47.752 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
00:17:47.752 14:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:48.688 nvme0n1 00:17:48.688 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:48.688 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:48.688 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.688 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.688 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.688 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.688 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.688 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.688 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:48.688 { 00:17:48.688 "cntlid": 1, 00:17:48.688 "qid": 0, 00:17:48.688 "state": "enabled", 00:17:48.688 "thread": "nvmf_tgt_poll_group_000", 00:17:48.688 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:17:48.688 "listen_address": { 00:17:48.688 "trtype": "TCP", 00:17:48.688 "adrfam": "IPv4", 00:17:48.688 "traddr": "10.0.0.3", 00:17:48.688 "trsvcid": "4420" 00:17:48.688 }, 00:17:48.688 "peer_address": { 00:17:48.688 "trtype": "TCP", 00:17:48.688 "adrfam": "IPv4", 00:17:48.688 "traddr": "10.0.0.1", 00:17:48.688 "trsvcid": "51822" 00:17:48.688 }, 00:17:48.688 "auth": { 00:17:48.688 "state": "completed", 00:17:48.688 "digest": "sha512", 00:17:48.688 "dhgroup": "ffdhe8192" 00:17:48.688 } 00:17:48.688 } 00:17:48.688 ]' 00:17:48.688 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:48.946 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:48.947 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:48.947 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:48.947 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:48.947 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.947 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.947 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.206 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YjYwYjE4ODZiZGUzYmE2ZWI3NGMxZDE3NjI3YWRhYjhkMDVlM2Q4ZDNjYTI1MWVhZjkwNzY0YWM3ZmMyZmQzZImPBGY=: 00:17:49.206 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:03:YjYwYjE4ODZiZGUzYmE2ZWI3NGMxZDE3NjI3YWRhYjhkMDVlM2Q4ZDNjYTI1MWVhZjkwNzY0YWM3ZmMyZmQzZImPBGY=: 00:17:49.774 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.774 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.774 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:17:49.774 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.774 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.774 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.774 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key3 00:17:49.774 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.774 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.774 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.774 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:49.774 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:50.033 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:50.033 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:50.033 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:50.033 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:50.033 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:50.033 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:50.033 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:50.033 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:50.033 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:50.033 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:50.291 request: 00:17:50.291 { 00:17:50.291 "name": "nvme0", 00:17:50.291 "trtype": "tcp", 00:17:50.291 "traddr": "10.0.0.3", 00:17:50.291 "adrfam": "ipv4", 00:17:50.291 "trsvcid": "4420", 00:17:50.291 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:50.291 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:17:50.291 "prchk_reftag": false, 00:17:50.291 "prchk_guard": false, 00:17:50.291 "hdgst": false, 00:17:50.291 "ddgst": false, 00:17:50.291 "dhchap_key": "key3", 00:17:50.291 "allow_unrecognized_csi": false, 00:17:50.291 "method": "bdev_nvme_attach_controller", 00:17:50.291 "req_id": 1 00:17:50.291 } 00:17:50.291 Got JSON-RPC error response 00:17:50.291 response: 00:17:50.291 { 00:17:50.291 "code": -5, 00:17:50.291 "message": "Input/output error" 00:17:50.291 } 00:17:50.291 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:50.291 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:50.291 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:50.291 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:50.291 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:17:50.291 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:17:50.291 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:50.291 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:50.550 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:50.550 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:50.550 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:50.550 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:50.550 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:50.550 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:50.550 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:50.550 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:50.550 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:50.550 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:50.810 request: 00:17:50.810 { 00:17:50.810 "name": "nvme0", 00:17:50.810 "trtype": "tcp", 00:17:50.810 "traddr": "10.0.0.3", 00:17:50.810 "adrfam": "ipv4", 00:17:50.810 "trsvcid": "4420", 00:17:50.810 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:50.810 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:17:50.810 "prchk_reftag": false, 00:17:50.810 "prchk_guard": false, 00:17:50.810 "hdgst": false, 00:17:50.810 "ddgst": false, 00:17:50.810 "dhchap_key": "key3", 00:17:50.810 "allow_unrecognized_csi": false, 00:17:50.810 "method": "bdev_nvme_attach_controller", 00:17:50.810 "req_id": 1 00:17:50.810 } 00:17:50.810 Got JSON-RPC error response 00:17:50.810 response: 00:17:50.810 { 00:17:50.810 "code": -5, 00:17:50.810 "message": "Input/output error" 00:17:50.810 } 00:17:50.810 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:50.810 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:50.810 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:50.810 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:50.810 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:50.810 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:17:50.810 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:50.810 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:50.810 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:50.810 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:50.810 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:17:50.810 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.810 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.810 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.810 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:17:50.810 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.810 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.069 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.069 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:51.069 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:51.069 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:51.069 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:51.069 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:51.069 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:51.069 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:51.069 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:51.069 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:51.069 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:51.329 request: 00:17:51.329 { 00:17:51.329 "name": "nvme0", 00:17:51.329 "trtype": "tcp", 00:17:51.329 "traddr": "10.0.0.3", 00:17:51.329 "adrfam": "ipv4", 00:17:51.329 "trsvcid": "4420", 00:17:51.329 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:51.329 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:17:51.329 "prchk_reftag": false, 00:17:51.329 "prchk_guard": false, 00:17:51.329 "hdgst": false, 00:17:51.329 "ddgst": false, 00:17:51.329 "dhchap_key": "key0", 00:17:51.329 "dhchap_ctrlr_key": "key1", 00:17:51.329 "allow_unrecognized_csi": false, 00:17:51.329 "method": "bdev_nvme_attach_controller", 00:17:51.329 "req_id": 1 00:17:51.329 } 00:17:51.329 Got JSON-RPC error response 00:17:51.329 response: 00:17:51.329 { 00:17:51.329 "code": -5, 00:17:51.329 "message": "Input/output error" 00:17:51.329 } 00:17:51.329 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:51.329 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:51.329 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:51.329 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:17:51.329 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:17:51.329 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:51.329 14:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:51.588 nvme0n1 00:17:51.588 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:17:51.588 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:17:51.588 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.847 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.848 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.848 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.116 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key1 00:17:52.116 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.116 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.116 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.116 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:52.116 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:52.116 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:53.052 nvme0n1 00:17:53.052 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:17:53.052 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.052 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:17:53.052 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.052 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:53.052 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.052 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.052 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.311 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:17:53.311 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.311 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:17:53.311 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.311 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NTVjZjM5ZDQ2MWRmOGUzODllYjM2MzVlNDMwOTc1NjA0NzQ0OWFmOWIyZmZlYzlmWliPlg==: --dhchap-ctrl-secret DHHC-1:03:YjYwYjE4ODZiZGUzYmE2ZWI3NGMxZDE3NjI3YWRhYjhkMDVlM2Q4ZDNjYTI1MWVhZjkwNzY0YWM3ZmMyZmQzZImPBGY=: 00:17:53.311 14:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid 406d54d0-5e94-472a-a2b3-4291f3ac81e0 -l 0 --dhchap-secret DHHC-1:02:NTVjZjM5ZDQ2MWRmOGUzODllYjM2MzVlNDMwOTc1NjA0NzQ0OWFmOWIyZmZlYzlmWliPlg==: --dhchap-ctrl-secret DHHC-1:03:YjYwYjE4ODZiZGUzYmE2ZWI3NGMxZDE3NjI3YWRhYjhkMDVlM2Q4ZDNjYTI1MWVhZjkwNzY0YWM3ZmMyZmQzZImPBGY=: 00:17:53.879 14:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:17:53.879 14:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:17:53.879 14:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:17:53.879 14:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:17:53.879 14:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:17:53.879 14:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:17:53.879 14:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:17:53.879 14:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.879 14:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.138 14:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:17:54.138 14:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:54.138 14:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:17:54.138 14:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:54.138 14:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:54.138 14:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:54.138 14:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:54.138 14:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:54.138 14:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:54.138 14:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:54.716 request: 00:17:54.716 { 00:17:54.716 "name": "nvme0", 00:17:54.716 "trtype": "tcp", 00:17:54.716 "traddr": "10.0.0.3", 00:17:54.716 "adrfam": "ipv4", 00:17:54.716 "trsvcid": "4420", 00:17:54.716 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:54.716 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0", 00:17:54.716 "prchk_reftag": false, 00:17:54.716 "prchk_guard": false, 00:17:54.716 "hdgst": false, 00:17:54.716 "ddgst": false, 00:17:54.716 "dhchap_key": "key1", 00:17:54.716 "allow_unrecognized_csi": false, 00:17:54.716 "method": "bdev_nvme_attach_controller", 00:17:54.716 "req_id": 1 00:17:54.716 } 00:17:54.716 Got JSON-RPC error response 00:17:54.716 response: 00:17:54.716 { 00:17:54.716 "code": -5, 00:17:54.716 "message": "Input/output error" 00:17:54.716 } 00:17:54.716 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:54.716 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:54.716 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:54.716 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:54.716 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:54.716 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:54.716 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:55.653 nvme0n1 00:17:55.653 
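For reference, the steps above are the core DH-HMAC-CHAP key-rotation pattern this test exercises: the target's allowed keys are rotated with nvmf_subsystem_set_keys, after which a host attach with a stale key fails with JSON-RPC error -5 (Input/output error), while an attach with the rotated pair succeeds and exposes bdev nvme0n1. A minimal sketch, condensed from the rpc.py invocations visible in the log (it assumes the key files have already been registered with keyring_file_add_key, the target app listens on the default /var/tmp/spdk.sock, and the host app on /var/tmp/host.sock):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0

    # Target side: from now on only key2 (host key) / key3 (controller key) are accepted.
    $RPC nvmf_subsystem_set_keys "$SUBNQN" "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key key3

    # Host side: authenticating with the stale key1 is rejected (-5, Input/output error).
    $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key1

    # Host side: the rotated pair authenticates and creates bdev nvme0n1.
    $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3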
14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:17:55.653 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:17:55.653 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.912 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.912 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.912 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.172 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:17:56.172 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.172 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.172 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.172 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:17:56.172 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:56.172 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:56.431 nvme0n1 00:17:56.431 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:17:56.431 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:17:56.431 14:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.693 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.693 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.693 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.952 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:56.952 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.952 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.952 14:22:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.952 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YTkwMmMwOGY3YTdkMGZjNWQ0ODJhMDk2ZGRmZjlmMjZ64U1I: '' 2s 00:17:56.952 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:56.952 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:56.952 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YTkwMmMwOGY3YTdkMGZjNWQ0ODJhMDk2ZGRmZjlmMjZ64U1I: 00:17:56.952 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:17:56.952 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:56.952 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:56.952 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YTkwMmMwOGY3YTdkMGZjNWQ0ODJhMDk2ZGRmZjlmMjZ64U1I: ]] 00:17:56.952 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YTkwMmMwOGY3YTdkMGZjNWQ0ODJhMDk2ZGRmZjlmMjZ64U1I: 00:17:56.952 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:17:56.952 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:56.952 14:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:58.871 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:17:58.871 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:17:58.871 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:17:58.871 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:17:58.871 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:17:58.871 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:17:58.871 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:17:58.871 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key1 --dhchap-ctrlr-key key2 00:17:58.871 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.871 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.871 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.871 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NTVjZjM5ZDQ2MWRmOGUzODllYjM2MzVlNDMwOTc1NjA0NzQ0OWFmOWIyZmZlYzlmWliPlg==: 2s 00:17:58.871 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:58.871 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:58.871 14:22:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:17:58.871 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NTVjZjM5ZDQ2MWRmOGUzODllYjM2MzVlNDMwOTc1NjA0NzQ0OWFmOWIyZmZlYzlmWliPlg==: 00:17:58.871 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:58.871 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:58.871 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:17:58.871 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NTVjZjM5ZDQ2MWRmOGUzODllYjM2MzVlNDMwOTc1NjA0NzQ0OWFmOWIyZmZlYzlmWliPlg==: ]] 00:17:58.871 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NTVjZjM5ZDQ2MWRmOGUzODllYjM2MzVlNDMwOTc1NjA0NzQ0OWFmOWIyZmZlYzlmWliPlg==: 00:17:58.871 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:58.871 14:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:01.406 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:18:01.406 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:18:01.406 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:18:01.406 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:18:01.406 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:18:01.406 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:18:01.406 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:18:01.407 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.407 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.407 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:01.407 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.407 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.407 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.407 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:01.407 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:01.407 14:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:01.975 nvme0n1 00:18:01.975 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:01.975 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.975 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.975 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.975 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:01.975 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:02.540 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:18:02.540 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.540 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:18:02.540 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.540 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:18:02.540 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.540 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.799 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.799 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:18:02.799 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:18:02.799 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:18:02.799 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.799 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:18:03.057 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.057 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:03.057 14:22:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.057 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.057 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.057 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:03.057 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:03.057 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:03.057 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:03.057 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:03.057 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:03.057 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:03.057 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:03.057 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:03.625 request: 00:18:03.625 { 00:18:03.625 "name": "nvme0", 00:18:03.625 "dhchap_key": "key1", 00:18:03.625 "dhchap_ctrlr_key": "key3", 00:18:03.625 "method": "bdev_nvme_set_keys", 00:18:03.625 "req_id": 1 00:18:03.625 } 00:18:03.625 Got JSON-RPC error response 00:18:03.625 response: 00:18:03.625 { 00:18:03.625 "code": -13, 00:18:03.625 "message": "Permission denied" 00:18:03.625 } 00:18:03.625 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:03.625 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:03.625 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:03.625 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:03.625 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:03.625 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:03.625 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.192 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:18:04.192 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:18:05.129 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:05.129 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:05.129 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.387 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:18:05.387 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:05.387 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.387 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.387 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.387 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:05.387 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:05.387 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:06.324 nvme0n1 00:18:06.324 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:06.324 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.324 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.324 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.324 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:06.324 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:06.324 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:06.324 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:06.324 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:06.324 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:06.324 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:06.324 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:06.324 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:06.597 request: 00:18:06.597 { 00:18:06.597 "name": "nvme0", 00:18:06.597 "dhchap_key": "key2", 00:18:06.597 "dhchap_ctrlr_key": "key0", 00:18:06.597 "method": "bdev_nvme_set_keys", 00:18:06.597 "req_id": 1 00:18:06.597 } 00:18:06.597 Got JSON-RPC error response 00:18:06.597 response: 00:18:06.597 { 00:18:06.597 "code": -13, 00:18:06.597 "message": "Permission denied" 00:18:06.597 } 00:18:06.857 14:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:06.857 14:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:06.857 14:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:06.857 14:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:06.857 14:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:06.857 14:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.857 14:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:06.857 14:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:18:06.857 14:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:18:08.236 14:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:08.236 14:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:08.236 14:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.236 14:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:18:08.236 14:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:18:08.236 14:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:18:08.236 14:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 70734 00:18:08.236 14:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 70734 ']' 00:18:08.236 14:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 70734 00:18:08.236 14:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:18:08.236 14:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:08.236 14:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70734 00:18:08.236 14:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:08.236 14:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:08.236 killing process with pid 70734 00:18:08.236 14:22:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70734' 00:18:08.236 14:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 70734 00:18:08.236 14:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 70734 00:18:10.771 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:10.771 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:10.771 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:18:10.771 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:10.771 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:18:10.771 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:10.771 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:10.771 rmmod nvme_tcp 00:18:10.771 rmmod nvme_fabrics 00:18:10.771 rmmod nvme_keyring 00:18:10.771 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:10.771 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:18:10.771 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:18:10.771 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 73567 ']' 00:18:10.771 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 73567 00:18:10.771 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 73567 ']' 00:18:10.771 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 73567 00:18:10.771 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:18:10.771 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:10.771 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73567 00:18:10.771 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:10.771 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:10.771 killing process with pid 73567 00:18:10.771 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73567' 00:18:10.771 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 73567 00:18:10.771 14:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 73567 00:18:12.147 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:12.148 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:12.148 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:12.148 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:18:12.148 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 
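The trace above is the tail of the DH-CHAP re-keying check: the target rotates the keys it will accept from this host from key0/key1 to key2/key3, a host-side bdev_nvme_set_keys that still presents the revoked key0 is rejected with JSON-RPC error -13 (Permission denied), and the test then waits for the stale controller to drop before tearing everything down. A minimal stand-alone sketch of that sequence, assuming the target answers on the default /var/tmp/spdk.sock, the host application on /var/tmp/host.sock, and key0-key3 already registered on both sides (the test's key files are the /tmp/spdk.key-* paths removed during cleanup), would look roughly like:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0

    # Target: allow key0/key1 for this host, then connect from the host side.
    $RPC nvmf_subsystem_set_keys "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key key1
    $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key0 --dhchap-ctrlr-key key1 \
        --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1

    # Target: rotate to key2/key3; key0 is no longer valid for this host.
    $RPC nvmf_subsystem_set_keys "$SUBNQN" "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key key3

    # Host: re-keying with the revoked key0 is expected to fail with -13 Permission denied.
    $RPC -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0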
00:18:12.148 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:12.148 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:18:12.148 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:12.148 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:12.148 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:12.148 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:12.148 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:12.148 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:12.148 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:12.148 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:12.148 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:12.148 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:12.148 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:12.148 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:12.148 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:12.148 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:12.406 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:12.406 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:12.406 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:12.406 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:12.406 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:12.406 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:18:12.406 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.kap /tmp/spdk.key-sha256.ifU /tmp/spdk.key-sha384.56O /tmp/spdk.key-sha512.PEB /tmp/spdk.key-sha512.1C8 /tmp/spdk.key-sha384.ZZg /tmp/spdk.key-sha256.b9F '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:18:12.406 00:18:12.406 real 2m53.467s 00:18:12.406 user 6m35.758s 00:18:12.406 sys 0m35.960s 00:18:12.406 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:12.406 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.406 ************************************ 00:18:12.406 END TEST nvmf_auth_target 
00:18:12.406 ************************************ 00:18:12.406 14:22:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:12.406 14:22:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:12.406 14:22:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:18:12.406 14:22:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:12.406 14:22:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:12.406 ************************************ 00:18:12.406 START TEST nvmf_bdevio_no_huge 00:18:12.406 ************************************ 00:18:12.406 14:22:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:12.667 * Looking for test storage... 00:18:12.667 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:12.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:12.667 --rc genhtml_branch_coverage=1 00:18:12.667 --rc genhtml_function_coverage=1 00:18:12.667 --rc genhtml_legend=1 00:18:12.667 --rc geninfo_all_blocks=1 00:18:12.667 --rc geninfo_unexecuted_blocks=1 00:18:12.667 00:18:12.667 ' 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:12.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:12.667 --rc genhtml_branch_coverage=1 00:18:12.667 --rc genhtml_function_coverage=1 00:18:12.667 --rc genhtml_legend=1 00:18:12.667 --rc geninfo_all_blocks=1 00:18:12.667 --rc geninfo_unexecuted_blocks=1 00:18:12.667 00:18:12.667 ' 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:12.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:12.667 --rc genhtml_branch_coverage=1 00:18:12.667 --rc genhtml_function_coverage=1 00:18:12.667 --rc genhtml_legend=1 00:18:12.667 --rc geninfo_all_blocks=1 00:18:12.667 --rc geninfo_unexecuted_blocks=1 00:18:12.667 00:18:12.667 ' 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:12.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:12.667 --rc genhtml_branch_coverage=1 00:18:12.667 --rc genhtml_function_coverage=1 00:18:12.667 --rc genhtml_legend=1 00:18:12.667 --rc geninfo_all_blocks=1 00:18:12.667 --rc geninfo_unexecuted_blocks=1 00:18:12.667 00:18:12.667 ' 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:12.667 
14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.667 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:18:12.668 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:12.668 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:12.668 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:12.668 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:12.668 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:12.668 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:12.668 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:12.668 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:12.668 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:12.668 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:12.668 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:12.668 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:12.668 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:12.668 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:12.668 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:12.668 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:12.668 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:12.668 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:12.668 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:12.668 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:12.668 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:12.668 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:12.668 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:12.668 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:12.668 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:12.668 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:12.668 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:12.668 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:12.668 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:12.668 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:12.668 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:12.668 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:12.668 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:12.668 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:12.668 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:12.668 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:12.668 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:12.668 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:12.668 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:12.668 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:12.668 
14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:12.668 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:12.668 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:12.668 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:12.668 Cannot find device "nvmf_init_br" 00:18:12.668 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:18:12.668 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:12.668 Cannot find device "nvmf_init_br2" 00:18:12.668 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:18:12.668 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:12.668 Cannot find device "nvmf_tgt_br" 00:18:12.668 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:18:12.668 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:12.668 Cannot find device "nvmf_tgt_br2" 00:18:12.668 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:18:12.668 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:12.668 Cannot find device "nvmf_init_br" 00:18:12.668 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:18:12.668 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:12.668 Cannot find device "nvmf_init_br2" 00:18:12.668 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:18:12.668 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:12.927 Cannot find device "nvmf_tgt_br" 00:18:12.927 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:18:12.927 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:12.927 Cannot find device "nvmf_tgt_br2" 00:18:12.927 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:18:12.927 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:12.927 Cannot find device "nvmf_br" 00:18:12.927 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:18:12.927 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:12.927 Cannot find device "nvmf_init_if" 00:18:12.927 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:18:12.927 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:12.927 Cannot find device "nvmf_init_if2" 00:18:12.927 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:18:12.927 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:18:12.927 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:12.927 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:18:12.927 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:12.927 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:12.927 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:18:12.927 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:12.927 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:12.927 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:12.927 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:12.927 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:12.927 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:12.927 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:12.927 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:12.927 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:12.927 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:12.927 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:12.927 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:12.927 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:12.927 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:12.928 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:12.928 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:12.928 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:12.928 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:12.928 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:12.928 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:12.928 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:12.928 14:22:40 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:12.928 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:13.187 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:13.187 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:13.187 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:13.187 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:13.187 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:13.187 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:13.187 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:13.187 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:13.187 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:13.187 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:13.187 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:13.187 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:18:13.187 00:18:13.187 --- 10.0.0.3 ping statistics --- 00:18:13.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:13.187 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:18:13.187 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:13.187 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:13.187 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.083 ms 00:18:13.187 00:18:13.187 --- 10.0.0.4 ping statistics --- 00:18:13.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:13.187 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:18:13.187 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:13.187 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:13.187 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:18:13.187 00:18:13.187 --- 10.0.0.1 ping statistics --- 00:18:13.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:13.187 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:18:13.187 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:13.187 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:13.187 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:18:13.187 00:18:13.187 --- 10.0.0.2 ping statistics --- 00:18:13.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:13.187 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:18:13.187 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:13.187 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:18:13.187 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:13.187 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:13.187 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:13.187 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:13.187 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:13.187 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:13.187 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:13.187 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:13.187 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:13.187 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:13.187 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:13.187 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=74219 00:18:13.187 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:13.187 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 74219 00:18:13.187 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # '[' -z 74219 ']' 00:18:13.187 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:13.187 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:13.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:13.187 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:13.187 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:13.187 14:22:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:13.446 [2024-11-06 14:22:40.857416] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
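The --no-huge bdevio run builds its own test network before starting the target: initiator-side veth interfaces (10.0.0.1, 10.0.0.2) stay in the root namespace, target-side interfaces (10.0.0.3, 10.0.0.4) are moved into nvmf_tgt_ns_spdk, both halves are joined through the nvmf_br bridge, TCP port 4420 is opened with iptables rules tagged SPDK_NVMF so cleanup can strip them later, and reachability is confirmed with the pings shown above. Condensed to one interface per side (the run above wires two), the same setup plus the hugepage-free target launch is approximately:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3    # root namespace must reach the target address

    # Target inside the namespace: no hugepages, -s sizes regular memory in MB.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78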
00:18:13.446 [2024-11-06 14:22:40.857571] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:13.446 [2024-11-06 14:22:41.073048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:13.706 [2024-11-06 14:22:41.219613] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:13.706 [2024-11-06 14:22:41.219688] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:13.706 [2024-11-06 14:22:41.219725] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:13.706 [2024-11-06 14:22:41.219747] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:13.706 [2024-11-06 14:22:41.219758] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:13.706 [2024-11-06 14:22:41.221755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:18:13.706 [2024-11-06 14:22:41.221972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:18:13.706 [2024-11-06 14:22:41.222317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:18:13.706 [2024-11-06 14:22:41.222435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:13.965 [2024-11-06 14:22:41.405550] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:14.232 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:14.232 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@866 -- # return 0 00:18:14.232 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:14.232 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:14.232 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:14.232 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:14.232 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:14.232 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.232 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:14.232 [2024-11-06 14:22:41.778151] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:14.232 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.232 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:14.232 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.232 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:14.232 Malloc0 00:18:14.232 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.232 14:22:41 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:14.232 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.232 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:14.492 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.492 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:14.492 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.492 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:14.492 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.492 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:14.492 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.492 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:14.492 [2024-11-06 14:22:41.891903] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:14.492 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.492 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:14.492 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:14.492 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:18:14.492 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:18:14.492 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:14.492 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:14.492 { 00:18:14.492 "params": { 00:18:14.492 "name": "Nvme$subsystem", 00:18:14.492 "trtype": "$TEST_TRANSPORT", 00:18:14.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:14.492 "adrfam": "ipv4", 00:18:14.492 "trsvcid": "$NVMF_PORT", 00:18:14.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:14.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:14.492 "hdgst": ${hdgst:-false}, 00:18:14.492 "ddgst": ${ddgst:-false} 00:18:14.492 }, 00:18:14.492 "method": "bdev_nvme_attach_controller" 00:18:14.492 } 00:18:14.492 EOF 00:18:14.492 )") 00:18:14.492 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:18:14.492 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
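With the target up, the device under test is provisioned entirely over JSON-RPC: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and a listener on 10.0.0.3:4420; bdevio is then launched against a generated host-side JSON config (--json /dev/fd/62 in the trace). Written as plain rpc.py calls against the default /var/tmp/spdk.sock instead of the rpc_cmd wrapper used above, the provisioning amounts to:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0      # 64 MiB, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

The host configuration that follows is the gen_nvmf_target_json output: a single bdev_nvme_attach_controller entry for Nvme1 at 10.0.0.3:4420 with header and data digests disabled, which bdevio consumes via --json at startup.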
00:18:14.492 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:18:14.492 14:22:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:14.492 "params": { 00:18:14.492 "name": "Nvme1", 00:18:14.492 "trtype": "tcp", 00:18:14.492 "traddr": "10.0.0.3", 00:18:14.492 "adrfam": "ipv4", 00:18:14.492 "trsvcid": "4420", 00:18:14.492 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:14.492 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:14.492 "hdgst": false, 00:18:14.492 "ddgst": false 00:18:14.492 }, 00:18:14.492 "method": "bdev_nvme_attach_controller" 00:18:14.492 }' 00:18:14.492 [2024-11-06 14:22:42.001166] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:18:14.492 [2024-11-06 14:22:42.001705] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid74255 ] 00:18:14.752 [2024-11-06 14:22:42.213623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:14.752 [2024-11-06 14:22:42.361585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:14.752 [2024-11-06 14:22:42.361779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.752 [2024-11-06 14:22:42.361821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:15.014 [2024-11-06 14:22:42.559321] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:15.582 I/O targets: 00:18:15.582 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:15.582 00:18:15.582 00:18:15.582 CUnit - A unit testing framework for C - Version 2.1-3 00:18:15.582 http://cunit.sourceforge.net/ 00:18:15.582 00:18:15.582 00:18:15.582 Suite: bdevio tests on: Nvme1n1 00:18:15.582 Test: blockdev write read block ...passed 00:18:15.582 Test: blockdev write zeroes read block ...passed 00:18:15.582 Test: blockdev write zeroes read no split ...passed 00:18:15.582 Test: blockdev write zeroes read split ...passed 00:18:15.582 Test: blockdev write zeroes read split partial ...passed 00:18:15.582 Test: blockdev reset ...[2024-11-06 14:22:42.982052] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:15.582 [2024-11-06 14:22:42.982234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000029c00 (9): Bad file descriptor 00:18:15.582 [2024-11-06 14:22:42.999828] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:18:15.582 passed 00:18:15.582 Test: blockdev write read 8 blocks ...passed 00:18:15.582 Test: blockdev write read size > 128k ...passed 00:18:15.582 Test: blockdev write read invalid size ...passed 00:18:15.582 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:15.582 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:15.582 Test: blockdev write read max offset ...passed 00:18:15.582 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:15.582 Test: blockdev writev readv 8 blocks ...passed 00:18:15.582 Test: blockdev writev readv 30 x 1block ...passed 00:18:15.582 Test: blockdev writev readv block ...passed 00:18:15.582 Test: blockdev writev readv size > 128k ...passed 00:18:15.582 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:15.582 Test: blockdev comparev and writev ...passed 00:18:15.582 Test: blockdev nvme passthru rw ...passed 00:18:15.582 Test: blockdev nvme passthru vendor specific ...passed 00:18:15.582 Test: blockdev nvme admin passthru ...[2024-11-06 14:22:43.012018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:15.582 [2024-11-06 14:22:43.012086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.582 [2024-11-06 14:22:43.012113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:15.582 [2024-11-06 14:22:43.012131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:15.582 [2024-11-06 14:22:43.012477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:15.582 [2024-11-06 14:22:43.012501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:15.582 [2024-11-06 14:22:43.012520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:15.582 [2024-11-06 14:22:43.012537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:15.582 [2024-11-06 14:22:43.012853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:15.582 [2024-11-06 14:22:43.012876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:15.582 [2024-11-06 14:22:43.012896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:15.582 [2024-11-06 14:22:43.012920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:15.583 [2024-11-06 14:22:43.013231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:15.583 [2024-11-06 14:22:43.013253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:15.583 [2024-11-06 14:22:43.013273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:15.583 [2024-11-06 14:22:43.013289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:15.583 [2024-11-06 14:22:43.014142] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:15.583 [2024-11-06 14:22:43.014174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:15.583 [2024-11-06 14:22:43.014302] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:15.583 [2024-11-06 14:22:43.014324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:15.583 [2024-11-06 14:22:43.014482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:15.583 [2024-11-06 14:22:43.014504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:15.583 [2024-11-06 14:22:43.014627] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:15.583 [2024-11-06 14:22:43.014647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:15.583 passed 00:18:15.583 Test: blockdev copy ...passed 00:18:15.583 00:18:15.583 Run Summary: Type Total Ran Passed Failed Inactive 00:18:15.583 suites 1 1 n/a 0 0 00:18:15.583 tests 23 23 23 0 0 00:18:15.583 asserts 152 152 152 0 n/a 00:18:15.583 00:18:15.583 Elapsed time = 0.295 seconds 00:18:16.520 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:16.520 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.520 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:16.520 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.520 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:16.520 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:16.520 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:16.520 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:18:16.520 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:16.520 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:18:16.520 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:16.520 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:16.520 rmmod nvme_tcp 00:18:16.520 rmmod nvme_fabrics 00:18:16.520 rmmod nvme_keyring 00:18:16.520 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:16.520 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:18:16.520 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:18:16.520 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 74219 ']' 00:18:16.520 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 74219 00:18:16.520 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' -z 74219 ']' 00:18:16.520 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # kill -0 74219 00:18:16.520 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # uname 00:18:16.520 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:16.520 14:22:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74219 00:18:16.520 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:18:16.520 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:18:16.520 killing process with pid 74219 00:18:16.520 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74219' 00:18:16.520 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@971 -- # kill 74219 00:18:16.520 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@976 -- # wait 74219 00:18:17.457 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:17.457 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:17.457 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:17.458 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:18:17.458 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:18:17.458 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:17.458 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:18:17.458 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:17.458 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:17.458 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:17.458 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:17.458 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:17.458 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:17.458 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:17.458 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:17.458 14:22:45 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:17.458 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:17.458 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:17.717 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:17.717 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:17.717 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:17.717 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:17.717 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:17.717 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:17.717 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:17.717 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:17.717 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:18:17.717 00:18:17.717 real 0m5.351s 00:18:17.717 user 0m17.926s 00:18:17.717 sys 0m2.047s 00:18:17.717 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:17.717 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:17.717 ************************************ 00:18:17.717 END TEST nvmf_bdevio_no_huge 00:18:17.717 ************************************ 00:18:17.976 14:22:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:17.976 14:22:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:17.976 14:22:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:17.976 14:22:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:17.976 ************************************ 00:18:17.976 START TEST nvmf_tls 00:18:17.976 ************************************ 00:18:17.976 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:17.976 * Looking for test storage... 
00:18:17.976 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:17.976 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:17.976 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:17.976 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:18:17.976 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:17.976 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:17.976 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:17.977 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:17.977 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:18:17.977 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:18:17.977 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:18:17.977 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:18:17.977 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:18:17.977 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:18:17.977 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:18:17.977 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:17.977 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:18:17.977 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:18:17.977 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:17.977 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:17.977 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:18:17.977 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:18:17.977 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:17.977 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:18:17.977 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:18:17.977 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:18:17.977 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:18:17.977 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:17.977 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:18:17.977 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:18:17.977 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:17.977 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:17.977 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:18:17.977 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:17.977 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:17.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:17.977 --rc genhtml_branch_coverage=1 00:18:17.977 --rc genhtml_function_coverage=1 00:18:17.977 --rc genhtml_legend=1 00:18:17.977 --rc geninfo_all_blocks=1 00:18:17.977 --rc geninfo_unexecuted_blocks=1 00:18:17.977 00:18:17.977 ' 00:18:17.977 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:17.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:17.977 --rc genhtml_branch_coverage=1 00:18:17.977 --rc genhtml_function_coverage=1 00:18:17.977 --rc genhtml_legend=1 00:18:17.977 --rc geninfo_all_blocks=1 00:18:17.977 --rc geninfo_unexecuted_blocks=1 00:18:17.977 00:18:17.977 ' 00:18:17.977 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:17.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:17.977 --rc genhtml_branch_coverage=1 00:18:17.977 --rc genhtml_function_coverage=1 00:18:17.977 --rc genhtml_legend=1 00:18:17.977 --rc geninfo_all_blocks=1 00:18:17.977 --rc geninfo_unexecuted_blocks=1 00:18:17.977 00:18:17.977 ' 00:18:17.977 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:17.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:17.977 --rc genhtml_branch_coverage=1 00:18:17.977 --rc genhtml_function_coverage=1 00:18:17.977 --rc genhtml_legend=1 00:18:17.977 --rc geninfo_all_blocks=1 00:18:17.977 --rc geninfo_unexecuted_blocks=1 00:18:17.977 00:18:17.977 ' 00:18:17.977 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:17.977 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:18.238 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:18.238 14:22:45 
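The lcov gate above uses scripts/common.sh's lt/cmp_versions helpers: the version strings are split on ".", "-" and ":" and the fields are compared numerically from left to right, so "1.15" sorts below "2" and tls.sh exports the legacy --rc lcov_* options. A standalone sketch of that comparison; the helper name version_lt and the assumption that every field is numeric are mine, not SPDK's:

    # True (exit 0) when $1 is strictly older than $2; fields are assumed numeric.
    version_lt() {
        local IFS='.-:'
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for ((i = 0; i < n; i++)); do
            local x=${a[i]:-0} y=${b[i]:-0}
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1    # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov older than 2: use legacy lcov options"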
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:18.238 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:18.238 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:18.238 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:18.238 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:18.238 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:18.238 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:18.238 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:18.238 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:18.238 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:18:18.238 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:18:18.238 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:18.238 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:18.238 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:18.238 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:18.238 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:18.238 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:18:18.238 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:18.238 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:18.238 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:18.238 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.238 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:18.239 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:18.239 
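The "[: : integer expression expected" complaint above comes from build_nvmf_app_args evaluating '[' '' -eq 1 ']': the variable behind that test is empty in this run, and [ requires integer operands for -eq, so it prints the warning and simply returns false, which is why the run continues unharmed. The usual way to keep such a guard quiet is to default the value first; a small illustration with a hypothetical flag variable (not one of the SPDK_* settings themselves):

    flag=""                                       # empty, as in the log above
    [ "$flag" -eq 1 ]                             # prints: [: : integer expression expected
    [ "${flag:-0}" -eq 1 ] && echo "flag is set"  # defaulting to 0 keeps the test quiet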
14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:18.239 Cannot find device "nvmf_init_br" 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:18.239 Cannot find device "nvmf_init_br2" 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:18.239 Cannot find device "nvmf_tgt_br" 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:18.239 Cannot find device "nvmf_tgt_br2" 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:18.239 Cannot find device "nvmf_init_br" 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:18.239 Cannot find device "nvmf_init_br2" 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:18.239 Cannot find device "nvmf_tgt_br" 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:18.239 Cannot find device "nvmf_tgt_br2" 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:18.239 Cannot find device "nvmf_br" 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:18.239 Cannot find device "nvmf_init_if" 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:18.239 Cannot find device "nvmf_init_if2" 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:18.239 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:18.239 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:18.239 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:18.498 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:18.498 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:18.498 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:18.498 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:18.498 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:18.498 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:18.498 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:18.498 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:18.498 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:18.498 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:18.498 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:18.498 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:18.498 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:18.498 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:18.498 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:18.498 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:18.498 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:18.498 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:18.498 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:18.498 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:18.498 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:18.498 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:18.498 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:18.498 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:18.498 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:18.498 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:18.498 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:18.498 14:22:46 
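nvmf_veth_init above builds the whole test network in software: veth pairs for the initiator side (nvmf_init_if, nvmf_init_if2), veth pairs whose far ends are moved into the nvmf_tgt_ns_spdk namespace for the target side (nvmf_tgt_if, nvmf_tgt_if2), and an nvmf_br bridge tying the host-side peers together, plus iptables ACCEPT rules tagged with an SPDK_NVMF comment so nvmf_tcp_fini can strip exactly those rules later. A condensed sketch of the same topology with one pair per side (run as root; names and addresses copied from the log):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator end + bridge end
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target end + bridge end
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # Tag the firewall rule so teardown can do: iptables-save | grep -v SPDK_NVMF | iptables-restore
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

The "Cannot find device" and "Cannot open network namespace" messages earlier are expected: the same teardown commands are run unconditionally before setup to clear any leftovers from a previous run.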
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:18.498 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:18.498 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:18.757 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:18.757 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:18.757 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.107 ms 00:18:18.757 00:18:18.757 --- 10.0.0.3 ping statistics --- 00:18:18.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:18.757 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:18:18.757 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:18.757 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:18.757 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.075 ms 00:18:18.757 00:18:18.757 --- 10.0.0.4 ping statistics --- 00:18:18.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:18.757 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:18:18.757 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:18.757 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:18.757 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:18:18.757 00:18:18.757 --- 10.0.0.1 ping statistics --- 00:18:18.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:18.757 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:18:18.757 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:18.757 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:18.757 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:18:18.757 00:18:18.757 --- 10.0.0.2 ping statistics --- 00:18:18.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:18.757 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:18:18.757 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:18.757 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:18:18.757 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:18.757 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:18.757 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:18.757 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:18.757 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:18.757 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:18.757 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:18.757 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:18.757 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:18.757 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:18.757 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:18.757 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=74532 00:18:18.757 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:18.757 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 74532 00:18:18.757 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 74532 ']' 00:18:18.757 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:18.757 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:18.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:18.757 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:18.757 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:18.757 14:22:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:18.757 [2024-11-06 14:22:46.322659] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:18:18.757 [2024-11-06 14:22:46.322780] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:19.016 [2024-11-06 14:22:46.512408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.016 [2024-11-06 14:22:46.631793] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:19.016 [2024-11-06 14:22:46.631894] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:19.016 [2024-11-06 14:22:46.631911] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:19.016 [2024-11-06 14:22:46.631934] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:19.016 [2024-11-06 14:22:46.631948] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:19.016 [2024-11-06 14:22:46.633311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:19.583 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:19.583 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:19.583 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:19.583 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:19.583 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:19.583 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:19.583 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:18:19.583 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:19.842 true 00:18:19.842 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:19.842 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:18:20.102 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:18:20.102 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:18:20.102 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:20.361 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:20.361 14:22:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:18:20.620 14:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:18:20.620 14:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:18:20.620 14:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:20.879 14:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:18:20.879 14:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:18:21.138 14:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:18:21.138 14:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:18:21.138 14:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:21.138 14:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:18:21.138 14:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:18:21.138 14:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:18:21.138 14:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:21.396 14:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:21.396 14:22:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:18:21.655 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:18:21.655 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:18:21.655 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:21.914 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:21.914 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:18:22.173 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:18:22.173 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:18:22.173 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:22.173 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:22.173 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:22.173 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:22.173 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:18:22.173 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:18:22.173 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:22.173 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:22.173 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:22.173 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:22.173 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:22.173 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:22.173 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:18:22.173 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:18:22.173 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:22.173 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:22.173 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:22.173 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.Hc9ZNPRISj 00:18:22.173 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:18:22.173 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.bNkjGCrbb9 00:18:22.173 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:22.173 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:22.173 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.Hc9ZNPRISj 00:18:22.173 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.bNkjGCrbb9 00:18:22.173 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:22.432 14:22:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:18:23.000 [2024-11-06 14:22:50.325582] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:23.000 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.Hc9ZNPRISj 00:18:23.000 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Hc9ZNPRISj 00:18:23.000 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:23.260 [2024-11-06 14:22:50.691275] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:23.260 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:23.519 14:22:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:18:23.519 [2024-11-06 14:22:51.110722] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:23.519 [2024-11-06 14:22:51.111102] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:23.519 14:22:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:23.778 malloc0 00:18:23.778 14:22:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:24.037 14:22:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Hc9ZNPRISj 00:18:24.296 14:22:51 
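format_interchange_psk above converts a raw key and a hash identifier into the NVMe/TCP TLS PSK interchange form NVMeTLSkey-1:hh:<base64>:. Decoding the base64 payload printed above gives back the configured bytes 00112233445566778899aabbccddeeff plus a four-byte trailer; treating that trailer as the little-endian CRC32 of the key is my assumption here, not something the log itself confirms. A standalone sketch of the encoding under that assumption:

    # Sketch of the interchange encoding. $1 = key string, $2 = hash id (01 appears to select SHA-256).
    # Assumption: the base64 payload is the key bytes followed by their little-endian CRC32.
    format_interchange_psk() {
        python3 -c 'import base64,sys,zlib;k=sys.argv[1].encode();print("NVMeTLSkey-1:%02x:%s:"%(int(sys.argv[2]),base64.b64encode(k+zlib.crc32(k).to_bytes(4,"little")).decode()),end="")' "$1" "$2"
    }

    # If the CRC32 assumption holds, this reproduces the first key above (NVMeTLSkey-1:01:MDAx...JEiQ:):
    format_interchange_psk 00112233445566778899aabbccddeeff 1; echo

The two keys are then written to mktemp files and chmod'd 0600, keeping the PSK files private before they are handed to keyring_file_add_key.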
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:24.554 14:22:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.Hc9ZNPRISj 00:18:36.763 Initializing NVMe Controllers 00:18:36.763 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:36.763 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:36.763 Initialization complete. Launching workers. 00:18:36.763 ======================================================== 00:18:36.763 Latency(us) 00:18:36.763 Device Information : IOPS MiB/s Average min max 00:18:36.763 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9733.66 38.02 6576.51 1632.03 19493.55 00:18:36.763 ======================================================== 00:18:36.763 Total : 9733.66 38.02 6576.51 1632.03 19493.55 00:18:36.763 00:18:36.763 14:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Hc9ZNPRISj 00:18:36.763 14:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:36.763 14:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:36.763 14:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:36.763 14:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Hc9ZNPRISj 00:18:36.763 14:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:36.763 14:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=74772 00:18:36.763 14:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:36.763 14:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 74772 /var/tmp/bdevperf.sock 00:18:36.763 14:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 74772 ']' 00:18:36.763 14:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:36.763 14:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:36.763 14:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:36.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:36.763 14:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
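Everything setup_nvmf_tgt did above, collected in one place: because nvmf_tgt was started with --wait-for-rpc, the ssl socket implementation and TLS version can be configured before framework_start_init, and only then are the transport, subsystem, TLS-capable listener (-k), malloc namespace, keyring key and PSK-bound host added. rpc.py below stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py, exactly as invoked in the log:

    rpc.py sock_set_default_impl -i ssl
    rpc.py sock_impl_set_options -i ssl --tls-version 13
    rpc.py framework_start_init
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py keyring_file_add_key key0 /tmp/tmp.Hc9ZNPRISj
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

With that in place, spdk_nvme_perf connects with -S ssl and --psk-path pointing at the same key file and sustains roughly 9.7k IOPS at 4 KiB over the TLS-encrypted queue pair in the run above.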
00:18:36.763 14:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:36.763 14:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:36.763 [2024-11-06 14:23:02.479158] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:18:36.763 [2024-11-06 14:23:02.479282] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74772 ] 00:18:36.763 [2024-11-06 14:23:02.660519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.763 [2024-11-06 14:23:02.802777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:36.763 [2024-11-06 14:23:03.038905] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:36.763 14:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:36.763 14:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:36.763 14:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Hc9ZNPRISj 00:18:36.763 14:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:36.763 [2024-11-06 14:23:03.726786] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:36.763 TLSTESTn1 00:18:36.763 14:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:36.763 Running I/O for 10 seconds... 
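The initiator half mirrors this. bdevperf is a separate SPDK application with its own RPC socket, so the PSK has to be registered there as well before bdev_nvme_attach_controller can negotiate TLS; only then does bdevperf.py start the verify workload against the resulting TLSTESTn1 bdev. The calls from the log, issued against the bdevperf socket:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Hc9ZNPRISj
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk key0
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

The trailing & on the bdevperf launch is my shorthand for the script's background start plus waitforlisten; the other lines are verbatim from the log.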
00:18:38.709 4071.00 IOPS, 15.90 MiB/s [2024-11-06T14:23:07.281Z] 4059.00 IOPS, 15.86 MiB/s [2024-11-06T14:23:08.217Z] 4052.33 IOPS, 15.83 MiB/s [2024-11-06T14:23:09.188Z] 4062.50 IOPS, 15.87 MiB/s [2024-11-06T14:23:10.125Z] 4066.60 IOPS, 15.89 MiB/s [2024-11-06T14:23:11.064Z] 4070.83 IOPS, 15.90 MiB/s [2024-11-06T14:23:11.999Z] 4073.14 IOPS, 15.91 MiB/s [2024-11-06T14:23:13.379Z] 4075.38 IOPS, 15.92 MiB/s [2024-11-06T14:23:13.947Z] 4071.44 IOPS, 15.90 MiB/s [2024-11-06T14:23:14.207Z] 4076.40 IOPS, 15.92 MiB/s 00:18:46.572 Latency(us) 00:18:46.572 [2024-11-06T14:23:14.207Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:46.572 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:46.572 Verification LBA range: start 0x0 length 0x2000 00:18:46.572 TLSTESTn1 : 10.02 4081.62 15.94 0.00 0.00 31307.50 6395.68 24319.38 00:18:46.572 [2024-11-06T14:23:14.207Z] =================================================================================================================== 00:18:46.572 [2024-11-06T14:23:14.207Z] Total : 4081.62 15.94 0.00 0.00 31307.50 6395.68 24319.38 00:18:46.572 { 00:18:46.572 "results": [ 00:18:46.572 { 00:18:46.572 "job": "TLSTESTn1", 00:18:46.572 "core_mask": "0x4", 00:18:46.572 "workload": "verify", 00:18:46.572 "status": "finished", 00:18:46.572 "verify_range": { 00:18:46.572 "start": 0, 00:18:46.572 "length": 8192 00:18:46.572 }, 00:18:46.572 "queue_depth": 128, 00:18:46.572 "io_size": 4096, 00:18:46.572 "runtime": 10.018073, 00:18:46.572 "iops": 4081.623282242004, 00:18:46.572 "mibps": 15.943840946257827, 00:18:46.572 "io_failed": 0, 00:18:46.572 "io_timeout": 0, 00:18:46.572 "avg_latency_us": 31307.49792034855, 00:18:46.572 "min_latency_us": 6395.681927710843, 00:18:46.572 "max_latency_us": 24319.38313253012 00:18:46.572 } 00:18:46.572 ], 00:18:46.572 "core_count": 1 00:18:46.572 } 00:18:46.572 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:46.572 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 74772 00:18:46.572 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 74772 ']' 00:18:46.572 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 74772 00:18:46.572 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:46.572 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:46.572 14:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74772 00:18:46.572 killing process with pid 74772 00:18:46.572 Received shutdown signal, test time was about 10.000000 seconds 00:18:46.572 00:18:46.572 Latency(us) 00:18:46.572 [2024-11-06T14:23:14.207Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:46.572 [2024-11-06T14:23:14.207Z] =================================================================================================================== 00:18:46.572 [2024-11-06T14:23:14.207Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:46.572 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:46.572 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:46.572 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing 
process with pid 74772' 00:18:46.572 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 74772 00:18:46.572 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 74772 00:18:47.951 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.bNkjGCrbb9 00:18:47.951 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:47.951 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.bNkjGCrbb9 00:18:47.951 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:47.951 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:47.952 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:47.952 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:47.952 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.bNkjGCrbb9 00:18:47.952 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:47.952 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:47.952 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:47.952 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.bNkjGCrbb9 00:18:47.952 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:47.952 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=74919 00:18:47.952 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:47.952 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:47.952 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 74919 /var/tmp/bdevperf.sock 00:18:47.952 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 74919 ']' 00:18:47.952 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:47.952 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:47.952 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:47.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:47.952 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:47.952 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:47.952 [2024-11-06 14:23:15.407455] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:18:47.952 [2024-11-06 14:23:15.407595] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74919 ] 00:18:48.211 [2024-11-06 14:23:15.591558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.211 [2024-11-06 14:23:15.737869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:48.472 [2024-11-06 14:23:15.977694] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:48.769 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:48.769 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:48.769 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.bNkjGCrbb9 00:18:49.027 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:49.287 [2024-11-06 14:23:16.799580] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:49.287 [2024-11-06 14:23:16.808329] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:49.287 [2024-11-06 14:23:16.809160] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (107): Transport endpoint is not connected 00:18:49.287 [2024-11-06 14:23:16.810130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:18:49.287 [2024-11-06 14:23:16.811117] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:49.287 [2024-11-06 14:23:16.811160] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:18:49.287 [2024-11-06 14:23:16.811178] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:49.287 [2024-11-06 14:23:16.811202] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:18:49.287 request: 00:18:49.287 { 00:18:49.287 "name": "TLSTEST", 00:18:49.287 "trtype": "tcp", 00:18:49.287 "traddr": "10.0.0.3", 00:18:49.287 "adrfam": "ipv4", 00:18:49.287 "trsvcid": "4420", 00:18:49.287 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:49.287 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:49.287 "prchk_reftag": false, 00:18:49.287 "prchk_guard": false, 00:18:49.287 "hdgst": false, 00:18:49.287 "ddgst": false, 00:18:49.287 "psk": "key0", 00:18:49.287 "allow_unrecognized_csi": false, 00:18:49.287 "method": "bdev_nvme_attach_controller", 00:18:49.287 "req_id": 1 00:18:49.287 } 00:18:49.287 Got JSON-RPC error response 00:18:49.287 response: 00:18:49.287 { 00:18:49.287 "code": -5, 00:18:49.287 "message": "Input/output error" 00:18:49.287 } 00:18:49.287 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 74919 00:18:49.287 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 74919 ']' 00:18:49.287 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 74919 00:18:49.287 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:49.287 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:49.287 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74919 00:18:49.287 killing process with pid 74919 00:18:49.287 Received shutdown signal, test time was about 10.000000 seconds 00:18:49.287 00:18:49.287 Latency(us) 00:18:49.287 [2024-11-06T14:23:16.922Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.287 [2024-11-06T14:23:16.922Z] =================================================================================================================== 00:18:49.287 [2024-11-06T14:23:16.922Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:49.287 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:49.287 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:49.287 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74919' 00:18:49.287 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 74919 00:18:49.287 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 74919 00:18:50.666 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:50.666 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:50.666 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:50.666 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:50.666 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:50.666 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Hc9ZNPRISj 00:18:50.666 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:50.666 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Hc9ZNPRISj 
00:18:50.666 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:50.666 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:50.666 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:50.666 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:50.666 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Hc9ZNPRISj 00:18:50.666 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:50.666 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:50.666 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:50.666 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Hc9ZNPRISj 00:18:50.666 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:50.666 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=74954 00:18:50.666 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:50.666 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:50.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:50.666 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 74954 /var/tmp/bdevperf.sock 00:18:50.666 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 74954 ']' 00:18:50.666 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:50.666 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:50.666 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:50.666 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:50.666 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:50.666 [2024-11-06 14:23:18.095846] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:18:50.666 [2024-11-06 14:23:18.095988] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74954 ] 00:18:50.666 [2024-11-06 14:23:18.279414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.924 [2024-11-06 14:23:18.429813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:51.183 [2024-11-06 14:23:18.672471] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:51.442 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:51.442 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:51.442 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Hc9ZNPRISj 00:18:51.701 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:18:51.960 [2024-11-06 14:23:19.400788] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:51.960 [2024-11-06 14:23:19.413626] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:51.960 [2024-11-06 14:23:19.413690] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:51.960 [2024-11-06 14:23:19.413782] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:51.960 [2024-11-06 14:23:19.414724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (107): Transport endpoint is not connected 00:18:51.960 [2024-11-06 14:23:19.415688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:18:51.960 [2024-11-06 14:23:19.416675] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:51.960 [2024-11-06 14:23:19.416724] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:18:51.960 [2024-11-06 14:23:19.416743] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:51.960 [2024-11-06 14:23:19.416767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:18:51.960 request: 00:18:51.960 { 00:18:51.960 "name": "TLSTEST", 00:18:51.960 "trtype": "tcp", 00:18:51.960 "traddr": "10.0.0.3", 00:18:51.960 "adrfam": "ipv4", 00:18:51.960 "trsvcid": "4420", 00:18:51.960 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:51.960 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:51.960 "prchk_reftag": false, 00:18:51.960 "prchk_guard": false, 00:18:51.960 "hdgst": false, 00:18:51.960 "ddgst": false, 00:18:51.960 "psk": "key0", 00:18:51.960 "allow_unrecognized_csi": false, 00:18:51.960 "method": "bdev_nvme_attach_controller", 00:18:51.960 "req_id": 1 00:18:51.960 } 00:18:51.960 Got JSON-RPC error response 00:18:51.960 response: 00:18:51.960 { 00:18:51.960 "code": -5, 00:18:51.960 "message": "Input/output error" 00:18:51.960 } 00:18:51.960 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 74954 00:18:51.960 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 74954 ']' 00:18:51.960 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 74954 00:18:51.960 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:51.960 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:51.960 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74954 00:18:51.960 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:51.960 killing process with pid 74954 00:18:51.960 Received shutdown signal, test time was about 10.000000 seconds 00:18:51.960 00:18:51.960 Latency(us) 00:18:51.960 [2024-11-06T14:23:19.595Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:51.960 [2024-11-06T14:23:19.595Z] =================================================================================================================== 00:18:51.960 [2024-11-06T14:23:19.595Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:51.960 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:51.960 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74954' 00:18:51.960 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 74954 00:18:51.960 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 74954 00:18:53.360 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:53.360 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:53.360 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:53.360 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:53.360 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:53.360 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Hc9ZNPRISj 00:18:53.360 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:53.360 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Hc9ZNPRISj 
00:18:53.360 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:53.360 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:53.360 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:53.360 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:53.360 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Hc9ZNPRISj 00:18:53.360 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:53.360 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:53.360 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:53.360 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Hc9ZNPRISj 00:18:53.360 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:53.360 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=75000 00:18:53.360 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:53.360 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:53.360 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 75000 /var/tmp/bdevperf.sock 00:18:53.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:53.360 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 75000 ']' 00:18:53.360 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:53.360 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:53.360 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:53.360 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:53.360 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:53.360 [2024-11-06 14:23:20.827934] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:18:53.360 [2024-11-06 14:23:20.828255] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75000 ] 00:18:53.619 [2024-11-06 14:23:21.012679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.619 [2024-11-06 14:23:21.161844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:53.878 [2024-11-06 14:23:21.410063] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:54.137 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:54.137 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:54.137 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Hc9ZNPRISj 00:18:54.396 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:54.655 [2024-11-06 14:23:22.080047] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:54.655 [2024-11-06 14:23:22.089773] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:54.655 [2024-11-06 14:23:22.089847] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:54.655 [2024-11-06 14:23:22.089930] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:54.655 [2024-11-06 14:23:22.090773] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (107): Transport endpoint is not connected 00:18:54.655 [2024-11-06 14:23:22.091738] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:18:54.656 request: 00:18:54.656 { 00:18:54.656 "name": "TLSTEST", 00:18:54.656 "trtype": "tcp", 00:18:54.656 "traddr": "10.0.0.3", 00:18:54.656 "adrfam": "ipv4", 00:18:54.656 "trsvcid": "4420", 00:18:54.656 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:54.656 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:54.656 "prchk_reftag": false, 00:18:54.656 "prchk_guard": false, 00:18:54.656 "hdgst": false, 00:18:54.656 "ddgst": false, 00:18:54.656 "psk": "key0", 00:18:54.656 "allow_unrecognized_csi": false, 00:18:54.656 "method": "bdev_nvme_attach_controller", 00:18:54.656 "req_id": 1 00:18:54.656 } 00:18:54.656 Got JSON-RPC error response 00:18:54.656 response: 00:18:54.656 { 00:18:54.656 "code": -5, 00:18:54.656 "message": "Input/output error" 00:18:54.656 } 00:18:54.656 [2024-11-06 14:23:22.092733] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:18:54.656 [2024-11-06 14:23:22.092773] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:18:54.656 [2024-11-06 14:23:22.092796] nvme.c: 
884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:18:54.656 [2024-11-06 14:23:22.092815] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:18:54.656 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 75000 00:18:54.656 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 75000 ']' 00:18:54.656 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 75000 00:18:54.656 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:54.656 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:54.656 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75000 00:18:54.656 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:54.656 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:54.656 killing process with pid 75000 00:18:54.656 Received shutdown signal, test time was about 10.000000 seconds 00:18:54.656 00:18:54.656 Latency(us) 00:18:54.656 [2024-11-06T14:23:22.291Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:54.656 [2024-11-06T14:23:22.291Z] =================================================================================================================== 00:18:54.656 [2024-11-06T14:23:22.291Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:54.656 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75000' 00:18:54.656 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 75000 00:18:54.656 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 75000 00:18:56.037 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:56.037 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:56.037 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:56.037 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:56.037 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:56.037 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:56.037 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:56.037 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:56.037 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:56.037 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:56.037 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:56.037 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:18:56.037 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:56.037 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:56.037 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:56.037 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:56.037 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:56.037 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:56.037 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=75035 00:18:56.037 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:56.037 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:56.037 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 75035 /var/tmp/bdevperf.sock 00:18:56.037 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 75035 ']' 00:18:56.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:56.037 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:56.037 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:56.038 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:56.038 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:56.038 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:56.038 [2024-11-06 14:23:23.359390] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:18:56.038 [2024-11-06 14:23:23.359519] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75035 ] 00:18:56.038 [2024-11-06 14:23:23.543900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.300 [2024-11-06 14:23:23.685862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:56.300 [2024-11-06 14:23:23.933373] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:56.868 14:23:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:56.868 14:23:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:56.868 14:23:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:18:56.868 [2024-11-06 14:23:24.389613] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:18:56.868 [2024-11-06 14:23:24.389673] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:56.868 request: 00:18:56.868 { 00:18:56.868 "name": "key0", 00:18:56.868 "path": "", 00:18:56.868 "method": "keyring_file_add_key", 00:18:56.868 "req_id": 1 00:18:56.868 } 00:18:56.868 Got JSON-RPC error response 00:18:56.868 response: 00:18:56.868 { 00:18:56.868 "code": -1, 00:18:56.868 "message": "Operation not permitted" 00:18:56.868 } 00:18:56.868 14:23:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:57.128 [2024-11-06 14:23:24.613533] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:57.128 [2024-11-06 14:23:24.613640] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:57.128 request: 00:18:57.128 { 00:18:57.128 "name": "TLSTEST", 00:18:57.128 "trtype": "tcp", 00:18:57.128 "traddr": "10.0.0.3", 00:18:57.128 "adrfam": "ipv4", 00:18:57.128 "trsvcid": "4420", 00:18:57.128 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:57.128 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:57.128 "prchk_reftag": false, 00:18:57.128 "prchk_guard": false, 00:18:57.128 "hdgst": false, 00:18:57.128 "ddgst": false, 00:18:57.128 "psk": "key0", 00:18:57.128 "allow_unrecognized_csi": false, 00:18:57.128 "method": "bdev_nvme_attach_controller", 00:18:57.128 "req_id": 1 00:18:57.128 } 00:18:57.128 Got JSON-RPC error response 00:18:57.128 response: 00:18:57.128 { 00:18:57.128 "code": -126, 00:18:57.128 "message": "Required key not available" 00:18:57.128 } 00:18:57.128 14:23:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 75035 00:18:57.128 14:23:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 75035 ']' 00:18:57.128 14:23:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 75035 00:18:57.128 14:23:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:57.128 14:23:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:57.128 14:23:24 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75035 00:18:57.128 14:23:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:57.128 14:23:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:57.128 14:23:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75035' 00:18:57.128 killing process with pid 75035 00:18:57.128 Received shutdown signal, test time was about 10.000000 seconds 00:18:57.128 00:18:57.128 Latency(us) 00:18:57.128 [2024-11-06T14:23:24.763Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:57.128 [2024-11-06T14:23:24.763Z] =================================================================================================================== 00:18:57.128 [2024-11-06T14:23:24.763Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:57.128 14:23:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 75035 00:18:57.128 14:23:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 75035 00:18:58.507 14:23:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:58.507 14:23:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:58.507 14:23:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:58.507 14:23:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:58.507 14:23:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:58.507 14:23:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 74532 00:18:58.507 14:23:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 74532 ']' 00:18:58.507 14:23:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 74532 00:18:58.507 14:23:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:58.507 14:23:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:58.507 14:23:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74532 00:18:58.507 killing process with pid 74532 00:18:58.507 14:23:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:58.507 14:23:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:58.507 14:23:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74532' 00:18:58.507 14:23:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 74532 00:18:58.507 14:23:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 74532 00:18:59.886 14:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:59.886 14:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:59.886 14:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:59.886 14:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 
-- # prefix=NVMeTLSkey-1 00:18:59.886 14:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:59.886 14:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:18:59.886 14:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:59.886 14:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:59.886 14:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:18:59.886 14:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.jaVnDvSk3q 00:18:59.886 14:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:59.886 14:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.jaVnDvSk3q 00:18:59.886 14:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:18:59.886 14:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:59.886 14:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:59.886 14:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:59.886 14:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=75098 00:18:59.886 14:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:59.886 14:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 75098 00:18:59.886 14:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 75098 ']' 00:18:59.886 14:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:59.886 14:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:59.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:59.886 14:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:59.886 14:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:59.886 14:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:59.886 [2024-11-06 14:23:27.509499] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:18:59.886 [2024-11-06 14:23:27.509633] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:00.146 [2024-11-06 14:23:27.696214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.405 [2024-11-06 14:23:27.817102] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:00.405 [2024-11-06 14:23:27.817174] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:00.405 [2024-11-06 14:23:27.817191] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:00.405 [2024-11-06 14:23:27.817214] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:00.405 [2024-11-06 14:23:27.817228] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:00.405 [2024-11-06 14:23:27.818555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:00.664 [2024-11-06 14:23:28.048156] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:00.923 14:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:00.924 14:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:00.924 14:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:00.924 14:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:00.924 14:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:00.924 14:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:00.924 14:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.jaVnDvSk3q 00:19:00.924 14:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.jaVnDvSk3q 00:19:00.924 14:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:01.182 [2024-11-06 14:23:28.606918] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:01.182 14:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:01.441 14:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:19:01.441 [2024-11-06 14:23:29.054690] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:01.441 [2024-11-06 14:23:29.055105] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:01.441 14:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:01.700 malloc0 00:19:01.700 14:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:01.969 14:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.jaVnDvSk3q 00:19:02.233 14:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:02.493 14:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jaVnDvSk3q 00:19:02.493 14:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
00:19:02.493 14:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:02.493 14:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:02.493 14:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.jaVnDvSk3q 00:19:02.493 14:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:02.493 14:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=75159 00:19:02.493 14:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:02.493 14:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:02.493 14:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 75159 /var/tmp/bdevperf.sock 00:19:02.493 14:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 75159 ']' 00:19:02.493 14:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:02.493 14:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:02.493 14:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:02.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:02.493 14:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:02.493 14:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:02.493 [2024-11-06 14:23:30.079877] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:19:02.493 [2024-11-06 14:23:30.080043] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75159 ] 00:19:02.752 [2024-11-06 14:23:30.270450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:03.010 [2024-11-06 14:23:30.421230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:03.269 [2024-11-06 14:23:30.670479] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:03.528 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:03.528 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:03.528 14:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.jaVnDvSk3q 00:19:03.788 14:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:03.788 [2024-11-06 14:23:31.397653] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:04.047 TLSTESTn1 00:19:04.047 14:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:04.047 Running I/O for 10 seconds... 00:19:06.361 3792.00 IOPS, 14.81 MiB/s [2024-11-06T14:23:34.931Z] 3915.50 IOPS, 15.29 MiB/s [2024-11-06T14:23:35.867Z] 3950.67 IOPS, 15.43 MiB/s [2024-11-06T14:23:36.804Z] 3904.75 IOPS, 15.25 MiB/s [2024-11-06T14:23:37.771Z] 3904.00 IOPS, 15.25 MiB/s [2024-11-06T14:23:38.708Z] 3928.17 IOPS, 15.34 MiB/s [2024-11-06T14:23:39.645Z] 3948.86 IOPS, 15.43 MiB/s [2024-11-06T14:23:41.021Z] 3951.00 IOPS, 15.43 MiB/s [2024-11-06T14:23:41.959Z] 3957.00 IOPS, 15.46 MiB/s [2024-11-06T14:23:41.959Z] 3964.90 IOPS, 15.49 MiB/s 00:19:14.324 Latency(us) 00:19:14.324 [2024-11-06T14:23:41.959Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:14.324 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:14.324 Verification LBA range: start 0x0 length 0x2000 00:19:14.324 TLSTESTn1 : 10.02 3970.93 15.51 0.00 0.00 32180.97 6369.36 35584.21 00:19:14.324 [2024-11-06T14:23:41.959Z] =================================================================================================================== 00:19:14.324 [2024-11-06T14:23:41.959Z] Total : 3970.93 15.51 0.00 0.00 32180.97 6369.36 35584.21 00:19:14.324 { 00:19:14.324 "results": [ 00:19:14.324 { 00:19:14.324 "job": "TLSTESTn1", 00:19:14.324 "core_mask": "0x4", 00:19:14.324 "workload": "verify", 00:19:14.324 "status": "finished", 00:19:14.324 "verify_range": { 00:19:14.324 "start": 0, 00:19:14.324 "length": 8192 00:19:14.324 }, 00:19:14.324 "queue_depth": 128, 00:19:14.324 "io_size": 4096, 00:19:14.324 "runtime": 10.016809, 00:19:14.324 "iops": 3970.9252717107815, 00:19:14.324 "mibps": 15.51142684262024, 00:19:14.324 "io_failed": 0, 00:19:14.324 "io_timeout": 0, 00:19:14.324 "avg_latency_us": 32180.97310109303, 00:19:14.324 "min_latency_us": 6369.362248995984, 00:19:14.324 
"max_latency_us": 35584.20562248996 00:19:14.324 } 00:19:14.324 ], 00:19:14.324 "core_count": 1 00:19:14.324 } 00:19:14.324 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:14.324 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 75159 00:19:14.324 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 75159 ']' 00:19:14.324 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 75159 00:19:14.324 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:14.324 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:14.324 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75159 00:19:14.324 killing process with pid 75159 00:19:14.324 Received shutdown signal, test time was about 10.000000 seconds 00:19:14.324 00:19:14.324 Latency(us) 00:19:14.324 [2024-11-06T14:23:41.959Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:14.324 [2024-11-06T14:23:41.959Z] =================================================================================================================== 00:19:14.324 [2024-11-06T14:23:41.959Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:14.324 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:14.324 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:14.324 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75159' 00:19:14.324 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 75159 00:19:14.324 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 75159 00:19:15.261 14:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.jaVnDvSk3q 00:19:15.261 14:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jaVnDvSk3q 00:19:15.261 14:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:15.261 14:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jaVnDvSk3q 00:19:15.261 14:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:15.261 14:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:15.261 14:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:15.261 14:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:15.261 14:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jaVnDvSk3q 00:19:15.261 14:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:15.261 14:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:15.261 14:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:15.261 14:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.jaVnDvSk3q 00:19:15.261 14:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:15.261 14:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=75301 00:19:15.261 14:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:15.261 14:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:15.261 14:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 75301 /var/tmp/bdevperf.sock 00:19:15.261 14:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 75301 ']' 00:19:15.261 14:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:15.261 14:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:15.261 14:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:15.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:15.261 14:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:15.261 14:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:15.520 [2024-11-06 14:23:42.904023] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:19:15.520 [2024-11-06 14:23:42.904356] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75301 ] 00:19:15.520 [2024-11-06 14:23:43.080931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.779 [2024-11-06 14:23:43.218697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:16.038 [2024-11-06 14:23:43.460155] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:16.297 14:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:16.297 14:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:16.297 14:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.jaVnDvSk3q 00:19:16.557 [2024-11-06 14:23:43.942386] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.jaVnDvSk3q': 0100666 00:19:16.557 [2024-11-06 14:23:43.942639] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:16.557 request: 00:19:16.557 { 00:19:16.557 "name": "key0", 00:19:16.557 "path": "/tmp/tmp.jaVnDvSk3q", 00:19:16.557 "method": "keyring_file_add_key", 00:19:16.557 "req_id": 1 00:19:16.557 } 00:19:16.557 Got JSON-RPC error response 00:19:16.557 response: 00:19:16.557 { 00:19:16.557 "code": -1, 00:19:16.557 "message": "Operation not permitted" 00:19:16.557 } 00:19:16.557 14:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:16.557 [2024-11-06 14:23:44.134262] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:16.557 [2024-11-06 14:23:44.134330] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:16.557 request: 00:19:16.557 { 00:19:16.557 "name": "TLSTEST", 00:19:16.557 "trtype": "tcp", 00:19:16.557 "traddr": "10.0.0.3", 00:19:16.557 "adrfam": "ipv4", 00:19:16.557 "trsvcid": "4420", 00:19:16.557 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:16.557 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:16.557 "prchk_reftag": false, 00:19:16.557 "prchk_guard": false, 00:19:16.557 "hdgst": false, 00:19:16.557 "ddgst": false, 00:19:16.557 "psk": "key0", 00:19:16.557 "allow_unrecognized_csi": false, 00:19:16.557 "method": "bdev_nvme_attach_controller", 00:19:16.557 "req_id": 1 00:19:16.557 } 00:19:16.557 Got JSON-RPC error response 00:19:16.557 response: 00:19:16.557 { 00:19:16.557 "code": -126, 00:19:16.557 "message": "Required key not available" 00:19:16.557 } 00:19:16.557 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 75301 00:19:16.557 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 75301 ']' 00:19:16.557 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 75301 00:19:16.557 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:16.557 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:16.557 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75301 00:19:16.816 killing process with pid 75301 00:19:16.816 Received shutdown signal, test time was about 10.000000 seconds 00:19:16.816 00:19:16.816 Latency(us) 00:19:16.816 [2024-11-06T14:23:44.451Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:16.816 [2024-11-06T14:23:44.451Z] =================================================================================================================== 00:19:16.816 [2024-11-06T14:23:44.451Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:16.816 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:16.816 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:16.816 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75301' 00:19:16.816 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 75301 00:19:16.816 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 75301 00:19:18.194 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:18.194 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:18.194 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:18.194 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:18.194 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:18.194 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 75098 00:19:18.194 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 75098 ']' 00:19:18.194 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 75098 00:19:18.194 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:18.194 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:18.194 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75098 00:19:18.194 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:18.194 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:18.194 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75098' 00:19:18.194 killing process with pid 75098 00:19:18.194 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 75098 00:19:18.194 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 75098 00:19:19.131 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:19:19.131 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:19.131 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:19.131 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:19:19.131 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:19.131 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=75360 00:19:19.131 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 75360 00:19:19.131 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 75360 ']' 00:19:19.131 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:19.131 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:19.131 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:19.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:19.131 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:19.131 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:19.390 [2024-11-06 14:23:46.865586] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:19:19.390 [2024-11-06 14:23:46.865723] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:19.648 [2024-11-06 14:23:47.056048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.648 [2024-11-06 14:23:47.176043] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:19.648 [2024-11-06 14:23:47.176112] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:19.648 [2024-11-06 14:23:47.176130] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:19.648 [2024-11-06 14:23:47.176151] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:19.648 [2024-11-06 14:23:47.176165] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
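The nvmfappstart step above boots the SPDK NVMe-oF target inside the nvmf_tgt_ns_spdk network namespace with a single-core mask (-m 0x2) and then blocks in waitforlisten until the target's JSON-RPC socket answers. A minimal sketch of that launch-and-wait pattern, using the paths from this run (the polling loop is illustrative, not the exact waitforlisten implementation):

    # start the target in the test namespace and remember its pid
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # wait until the UNIX-domain RPC socket accepts a trivial call
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    until $rpc -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done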
00:19:19.648 [2024-11-06 14:23:47.177425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:19.906 [2024-11-06 14:23:47.401962] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:20.171 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:20.171 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:20.171 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:20.171 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:20.171 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:20.431 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:20.431 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.jaVnDvSk3q 00:19:20.431 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:20.431 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.jaVnDvSk3q 00:19:20.431 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:19:20.431 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:20.431 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:19:20.431 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:20.431 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.jaVnDvSk3q 00:19:20.431 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.jaVnDvSk3q 00:19:20.431 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:20.431 [2024-11-06 14:23:48.063608] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:20.691 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:20.950 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:19:20.950 [2024-11-06 14:23:48.531238] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:20.950 [2024-11-06 14:23:48.532059] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:20.950 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:21.209 malloc0 00:19:21.209 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:21.469 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.jaVnDvSk3q 00:19:21.728 
[2024-11-06 14:23:49.176731] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.jaVnDvSk3q': 0100666 00:19:21.728 [2024-11-06 14:23:49.176803] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:21.728 request: 00:19:21.728 { 00:19:21.728 "name": "key0", 00:19:21.728 "path": "/tmp/tmp.jaVnDvSk3q", 00:19:21.728 "method": "keyring_file_add_key", 00:19:21.728 "req_id": 1 00:19:21.728 } 00:19:21.728 Got JSON-RPC error response 00:19:21.728 response: 00:19:21.728 { 00:19:21.728 "code": -1, 00:19:21.728 "message": "Operation not permitted" 00:19:21.728 } 00:19:21.728 14:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:21.988 [2024-11-06 14:23:49.416470] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:19:21.988 [2024-11-06 14:23:49.416552] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:21.988 request: 00:19:21.988 { 00:19:21.988 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:21.988 "host": "nqn.2016-06.io.spdk:host1", 00:19:21.988 "psk": "key0", 00:19:21.988 "method": "nvmf_subsystem_add_host", 00:19:21.988 "req_id": 1 00:19:21.988 } 00:19:21.988 Got JSON-RPC error response 00:19:21.988 response: 00:19:21.988 { 00:19:21.988 "code": -32603, 00:19:21.988 "message": "Internal error" 00:19:21.988 } 00:19:21.988 14:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:21.988 14:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:21.988 14:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:21.988 14:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:21.988 14:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 75360 00:19:21.988 14:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 75360 ']' 00:19:21.988 14:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 75360 00:19:21.988 14:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:21.988 14:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:21.988 14:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75360 00:19:21.988 killing process with pid 75360 00:19:21.988 14:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:21.988 14:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:21.988 14:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75360' 00:19:21.988 14:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 75360 00:19:21.988 14:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 75360 00:19:23.373 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.jaVnDvSk3q 00:19:23.374 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:19:23.374 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:23.374 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:23.374 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:23.374 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:23.374 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=75441 00:19:23.374 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 75441 00:19:23.374 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 75441 ']' 00:19:23.374 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:23.374 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:23.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:23.374 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:23.374 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:23.374 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:23.374 [2024-11-06 14:23:50.896685] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:19:23.374 [2024-11-06 14:23:50.896816] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:23.633 [2024-11-06 14:23:51.086377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.633 [2024-11-06 14:23:51.240055] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:23.633 [2024-11-06 14:23:51.240118] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:23.633 [2024-11-06 14:23:51.240135] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:23.633 [2024-11-06 14:23:51.240157] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:23.633 [2024-11-06 14:23:51.240170] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
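The earlier setup_nvmf_tgt attempt failed on purpose: keyring_file_add_key refuses a PSK file that is group/world readable (mode 0100666 in the trace), so the follow-up nvmf_subsystem_add_host --psk key0 also fails with "Key 'key0' does not exist". After chmod 0600 the same sequence is replayed against the fresh target started above. Roughly, the recovery is (commands taken from the trace, key path as generated by this run):

    chmod 0600 /tmp/tmp.jaVnDvSk3q
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc keyring_file_add_key key0 /tmp/tmp.jaVnDvSk3q
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0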
00:19:23.633 [2024-11-06 14:23:51.241621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:23.892 [2024-11-06 14:23:51.498037] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:24.150 14:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:24.150 14:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:24.150 14:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:24.150 14:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:24.150 14:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.409 14:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:24.409 14:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.jaVnDvSk3q 00:19:24.410 14:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.jaVnDvSk3q 00:19:24.410 14:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:24.410 [2024-11-06 14:23:51.995192] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:24.410 14:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:24.668 14:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:19:24.927 [2024-11-06 14:23:52.414708] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:24.927 [2024-11-06 14:23:52.415264] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:24.927 14:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:25.186 malloc0 00:19:25.186 14:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:25.444 14:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.jaVnDvSk3q 00:19:25.703 14:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:25.962 14:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:25.962 14:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=75497 00:19:25.962 14:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:25.962 14:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 75497 /var/tmp/bdevperf.sock 00:19:25.962 14:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 75497 ']' 
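What follows is the initiator half of the test: bdevperf is a separate SPDK application with its own RPC socket, so the PSK has to be registered in its keyring as well before bdev_nvme_attach_controller can open the TLS-secured connection. A condensed sketch of the traced commands (-z keeps bdevperf idle until perform_tests is issued later):

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.jaVnDvSk3q
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk key0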
00:19:25.962 14:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:25.962 14:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:25.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:25.962 14:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:25.962 14:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:25.962 14:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:25.962 [2024-11-06 14:23:53.502844] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:19:25.962 [2024-11-06 14:23:53.502992] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75497 ] 00:19:26.221 [2024-11-06 14:23:53.686478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.221 [2024-11-06 14:23:53.837640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:26.479 [2024-11-06 14:23:54.080335] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:26.738 14:23:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:26.738 14:23:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:26.738 14:23:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.jaVnDvSk3q 00:19:26.997 14:23:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:27.302 [2024-11-06 14:23:54.787826] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:27.302 TLSTESTn1 00:19:27.302 14:23:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:19:27.906 14:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:19:27.906 "subsystems": [ 00:19:27.906 { 00:19:27.906 "subsystem": "keyring", 00:19:27.906 "config": [ 00:19:27.906 { 00:19:27.906 "method": "keyring_file_add_key", 00:19:27.906 "params": { 00:19:27.906 "name": "key0", 00:19:27.906 "path": "/tmp/tmp.jaVnDvSk3q" 00:19:27.906 } 00:19:27.906 } 00:19:27.906 ] 00:19:27.906 }, 00:19:27.906 { 00:19:27.906 "subsystem": "iobuf", 00:19:27.906 "config": [ 00:19:27.906 { 00:19:27.906 "method": "iobuf_set_options", 00:19:27.906 "params": { 00:19:27.906 "small_pool_count": 8192, 00:19:27.906 "large_pool_count": 1024, 00:19:27.906 "small_bufsize": 8192, 00:19:27.906 "large_bufsize": 135168, 00:19:27.906 "enable_numa": false 00:19:27.906 } 00:19:27.906 } 00:19:27.906 ] 00:19:27.906 }, 00:19:27.906 { 00:19:27.906 "subsystem": "sock", 00:19:27.906 "config": [ 00:19:27.906 { 00:19:27.906 "method": "sock_set_default_impl", 00:19:27.906 "params": { 
00:19:27.906 "impl_name": "uring" 00:19:27.906 } 00:19:27.906 }, 00:19:27.906 { 00:19:27.906 "method": "sock_impl_set_options", 00:19:27.906 "params": { 00:19:27.906 "impl_name": "ssl", 00:19:27.906 "recv_buf_size": 4096, 00:19:27.906 "send_buf_size": 4096, 00:19:27.906 "enable_recv_pipe": true, 00:19:27.906 "enable_quickack": false, 00:19:27.906 "enable_placement_id": 0, 00:19:27.906 "enable_zerocopy_send_server": true, 00:19:27.906 "enable_zerocopy_send_client": false, 00:19:27.906 "zerocopy_threshold": 0, 00:19:27.906 "tls_version": 0, 00:19:27.906 "enable_ktls": false 00:19:27.906 } 00:19:27.906 }, 00:19:27.906 { 00:19:27.906 "method": "sock_impl_set_options", 00:19:27.906 "params": { 00:19:27.906 "impl_name": "posix", 00:19:27.906 "recv_buf_size": 2097152, 00:19:27.906 "send_buf_size": 2097152, 00:19:27.906 "enable_recv_pipe": true, 00:19:27.906 "enable_quickack": false, 00:19:27.906 "enable_placement_id": 0, 00:19:27.906 "enable_zerocopy_send_server": true, 00:19:27.906 "enable_zerocopy_send_client": false, 00:19:27.906 "zerocopy_threshold": 0, 00:19:27.906 "tls_version": 0, 00:19:27.906 "enable_ktls": false 00:19:27.906 } 00:19:27.906 }, 00:19:27.906 { 00:19:27.906 "method": "sock_impl_set_options", 00:19:27.906 "params": { 00:19:27.906 "impl_name": "uring", 00:19:27.906 "recv_buf_size": 2097152, 00:19:27.906 "send_buf_size": 2097152, 00:19:27.906 "enable_recv_pipe": true, 00:19:27.906 "enable_quickack": false, 00:19:27.906 "enable_placement_id": 0, 00:19:27.906 "enable_zerocopy_send_server": false, 00:19:27.906 "enable_zerocopy_send_client": false, 00:19:27.906 "zerocopy_threshold": 0, 00:19:27.906 "tls_version": 0, 00:19:27.906 "enable_ktls": false 00:19:27.906 } 00:19:27.906 } 00:19:27.906 ] 00:19:27.906 }, 00:19:27.906 { 00:19:27.906 "subsystem": "vmd", 00:19:27.906 "config": [] 00:19:27.906 }, 00:19:27.906 { 00:19:27.906 "subsystem": "accel", 00:19:27.906 "config": [ 00:19:27.906 { 00:19:27.906 "method": "accel_set_options", 00:19:27.906 "params": { 00:19:27.906 "small_cache_size": 128, 00:19:27.906 "large_cache_size": 16, 00:19:27.906 "task_count": 2048, 00:19:27.906 "sequence_count": 2048, 00:19:27.906 "buf_count": 2048 00:19:27.906 } 00:19:27.906 } 00:19:27.906 ] 00:19:27.906 }, 00:19:27.906 { 00:19:27.906 "subsystem": "bdev", 00:19:27.906 "config": [ 00:19:27.906 { 00:19:27.906 "method": "bdev_set_options", 00:19:27.906 "params": { 00:19:27.906 "bdev_io_pool_size": 65535, 00:19:27.906 "bdev_io_cache_size": 256, 00:19:27.906 "bdev_auto_examine": true, 00:19:27.906 "iobuf_small_cache_size": 128, 00:19:27.906 "iobuf_large_cache_size": 16 00:19:27.906 } 00:19:27.906 }, 00:19:27.906 { 00:19:27.906 "method": "bdev_raid_set_options", 00:19:27.906 "params": { 00:19:27.906 "process_window_size_kb": 1024, 00:19:27.906 "process_max_bandwidth_mb_sec": 0 00:19:27.906 } 00:19:27.906 }, 00:19:27.906 { 00:19:27.906 "method": "bdev_iscsi_set_options", 00:19:27.906 "params": { 00:19:27.906 "timeout_sec": 30 00:19:27.906 } 00:19:27.906 }, 00:19:27.906 { 00:19:27.906 "method": "bdev_nvme_set_options", 00:19:27.906 "params": { 00:19:27.906 "action_on_timeout": "none", 00:19:27.906 "timeout_us": 0, 00:19:27.906 "timeout_admin_us": 0, 00:19:27.906 "keep_alive_timeout_ms": 10000, 00:19:27.906 "arbitration_burst": 0, 00:19:27.906 "low_priority_weight": 0, 00:19:27.906 "medium_priority_weight": 0, 00:19:27.906 "high_priority_weight": 0, 00:19:27.906 "nvme_adminq_poll_period_us": 10000, 00:19:27.906 "nvme_ioq_poll_period_us": 0, 00:19:27.906 "io_queue_requests": 0, 00:19:27.906 "delay_cmd_submit": 
true, 00:19:27.906 "transport_retry_count": 4, 00:19:27.906 "bdev_retry_count": 3, 00:19:27.906 "transport_ack_timeout": 0, 00:19:27.906 "ctrlr_loss_timeout_sec": 0, 00:19:27.906 "reconnect_delay_sec": 0, 00:19:27.906 "fast_io_fail_timeout_sec": 0, 00:19:27.906 "disable_auto_failback": false, 00:19:27.906 "generate_uuids": false, 00:19:27.906 "transport_tos": 0, 00:19:27.906 "nvme_error_stat": false, 00:19:27.906 "rdma_srq_size": 0, 00:19:27.906 "io_path_stat": false, 00:19:27.906 "allow_accel_sequence": false, 00:19:27.906 "rdma_max_cq_size": 0, 00:19:27.906 "rdma_cm_event_timeout_ms": 0, 00:19:27.906 "dhchap_digests": [ 00:19:27.906 "sha256", 00:19:27.906 "sha384", 00:19:27.906 "sha512" 00:19:27.906 ], 00:19:27.906 "dhchap_dhgroups": [ 00:19:27.906 "null", 00:19:27.906 "ffdhe2048", 00:19:27.906 "ffdhe3072", 00:19:27.906 "ffdhe4096", 00:19:27.906 "ffdhe6144", 00:19:27.906 "ffdhe8192" 00:19:27.906 ] 00:19:27.906 } 00:19:27.906 }, 00:19:27.906 { 00:19:27.906 "method": "bdev_nvme_set_hotplug", 00:19:27.906 "params": { 00:19:27.906 "period_us": 100000, 00:19:27.906 "enable": false 00:19:27.906 } 00:19:27.906 }, 00:19:27.906 { 00:19:27.906 "method": "bdev_malloc_create", 00:19:27.906 "params": { 00:19:27.906 "name": "malloc0", 00:19:27.906 "num_blocks": 8192, 00:19:27.906 "block_size": 4096, 00:19:27.906 "physical_block_size": 4096, 00:19:27.906 "uuid": "c7f185c5-57a6-474d-a9e9-3530554d68b2", 00:19:27.906 "optimal_io_boundary": 0, 00:19:27.906 "md_size": 0, 00:19:27.906 "dif_type": 0, 00:19:27.906 "dif_is_head_of_md": false, 00:19:27.906 "dif_pi_format": 0 00:19:27.906 } 00:19:27.906 }, 00:19:27.906 { 00:19:27.907 "method": "bdev_wait_for_examine" 00:19:27.907 } 00:19:27.907 ] 00:19:27.907 }, 00:19:27.907 { 00:19:27.907 "subsystem": "nbd", 00:19:27.907 "config": [] 00:19:27.907 }, 00:19:27.907 { 00:19:27.907 "subsystem": "scheduler", 00:19:27.907 "config": [ 00:19:27.907 { 00:19:27.907 "method": "framework_set_scheduler", 00:19:27.907 "params": { 00:19:27.907 "name": "static" 00:19:27.907 } 00:19:27.907 } 00:19:27.907 ] 00:19:27.907 }, 00:19:27.907 { 00:19:27.907 "subsystem": "nvmf", 00:19:27.907 "config": [ 00:19:27.907 { 00:19:27.907 "method": "nvmf_set_config", 00:19:27.907 "params": { 00:19:27.907 "discovery_filter": "match_any", 00:19:27.907 "admin_cmd_passthru": { 00:19:27.907 "identify_ctrlr": false 00:19:27.907 }, 00:19:27.907 "dhchap_digests": [ 00:19:27.907 "sha256", 00:19:27.907 "sha384", 00:19:27.907 "sha512" 00:19:27.907 ], 00:19:27.907 "dhchap_dhgroups": [ 00:19:27.907 "null", 00:19:27.907 "ffdhe2048", 00:19:27.907 "ffdhe3072", 00:19:27.907 "ffdhe4096", 00:19:27.907 "ffdhe6144", 00:19:27.907 "ffdhe8192" 00:19:27.907 ] 00:19:27.907 } 00:19:27.907 }, 00:19:27.907 { 00:19:27.907 "method": "nvmf_set_max_subsystems", 00:19:27.907 "params": { 00:19:27.907 "max_subsystems": 1024 00:19:27.907 } 00:19:27.907 }, 00:19:27.907 { 00:19:27.907 "method": "nvmf_set_crdt", 00:19:27.907 "params": { 00:19:27.907 "crdt1": 0, 00:19:27.907 "crdt2": 0, 00:19:27.907 "crdt3": 0 00:19:27.907 } 00:19:27.907 }, 00:19:27.907 { 00:19:27.907 "method": "nvmf_create_transport", 00:19:27.907 "params": { 00:19:27.907 "trtype": "TCP", 00:19:27.907 "max_queue_depth": 128, 00:19:27.907 "max_io_qpairs_per_ctrlr": 127, 00:19:27.907 "in_capsule_data_size": 4096, 00:19:27.907 "max_io_size": 131072, 00:19:27.907 "io_unit_size": 131072, 00:19:27.907 "max_aq_depth": 128, 00:19:27.907 "num_shared_buffers": 511, 00:19:27.907 "buf_cache_size": 4294967295, 00:19:27.907 "dif_insert_or_strip": false, 00:19:27.907 "zcopy": false, 
00:19:27.907 "c2h_success": false, 00:19:27.907 "sock_priority": 0, 00:19:27.907 "abort_timeout_sec": 1, 00:19:27.907 "ack_timeout": 0, 00:19:27.907 "data_wr_pool_size": 0 00:19:27.907 } 00:19:27.907 }, 00:19:27.907 { 00:19:27.907 "method": "nvmf_create_subsystem", 00:19:27.907 "params": { 00:19:27.907 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:27.907 "allow_any_host": false, 00:19:27.907 "serial_number": "SPDK00000000000001", 00:19:27.907 "model_number": "SPDK bdev Controller", 00:19:27.907 "max_namespaces": 10, 00:19:27.907 "min_cntlid": 1, 00:19:27.907 "max_cntlid": 65519, 00:19:27.907 "ana_reporting": false 00:19:27.907 } 00:19:27.907 }, 00:19:27.907 { 00:19:27.907 "method": "nvmf_subsystem_add_host", 00:19:27.907 "params": { 00:19:27.907 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:27.907 "host": "nqn.2016-06.io.spdk:host1", 00:19:27.907 "psk": "key0" 00:19:27.907 } 00:19:27.907 }, 00:19:27.907 { 00:19:27.907 "method": "nvmf_subsystem_add_ns", 00:19:27.907 "params": { 00:19:27.907 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:27.907 "namespace": { 00:19:27.907 "nsid": 1, 00:19:27.907 "bdev_name": "malloc0", 00:19:27.907 "nguid": "C7F185C557A6474DA9E93530554D68B2", 00:19:27.907 "uuid": "c7f185c5-57a6-474d-a9e9-3530554d68b2", 00:19:27.907 "no_auto_visible": false 00:19:27.907 } 00:19:27.907 } 00:19:27.907 }, 00:19:27.907 { 00:19:27.907 "method": "nvmf_subsystem_add_listener", 00:19:27.907 "params": { 00:19:27.907 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:27.907 "listen_address": { 00:19:27.907 "trtype": "TCP", 00:19:27.907 "adrfam": "IPv4", 00:19:27.907 "traddr": "10.0.0.3", 00:19:27.907 "trsvcid": "4420" 00:19:27.907 }, 00:19:27.907 "secure_channel": true 00:19:27.907 } 00:19:27.907 } 00:19:27.907 ] 00:19:27.907 } 00:19:27.907 ] 00:19:27.907 }' 00:19:27.907 14:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:28.167 14:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:19:28.167 "subsystems": [ 00:19:28.167 { 00:19:28.167 "subsystem": "keyring", 00:19:28.167 "config": [ 00:19:28.167 { 00:19:28.167 "method": "keyring_file_add_key", 00:19:28.167 "params": { 00:19:28.167 "name": "key0", 00:19:28.167 "path": "/tmp/tmp.jaVnDvSk3q" 00:19:28.167 } 00:19:28.167 } 00:19:28.167 ] 00:19:28.167 }, 00:19:28.167 { 00:19:28.167 "subsystem": "iobuf", 00:19:28.167 "config": [ 00:19:28.167 { 00:19:28.167 "method": "iobuf_set_options", 00:19:28.167 "params": { 00:19:28.167 "small_pool_count": 8192, 00:19:28.167 "large_pool_count": 1024, 00:19:28.167 "small_bufsize": 8192, 00:19:28.167 "large_bufsize": 135168, 00:19:28.167 "enable_numa": false 00:19:28.167 } 00:19:28.167 } 00:19:28.167 ] 00:19:28.167 }, 00:19:28.167 { 00:19:28.167 "subsystem": "sock", 00:19:28.167 "config": [ 00:19:28.167 { 00:19:28.167 "method": "sock_set_default_impl", 00:19:28.167 "params": { 00:19:28.167 "impl_name": "uring" 00:19:28.167 } 00:19:28.167 }, 00:19:28.167 { 00:19:28.167 "method": "sock_impl_set_options", 00:19:28.167 "params": { 00:19:28.167 "impl_name": "ssl", 00:19:28.167 "recv_buf_size": 4096, 00:19:28.167 "send_buf_size": 4096, 00:19:28.167 "enable_recv_pipe": true, 00:19:28.167 "enable_quickack": false, 00:19:28.167 "enable_placement_id": 0, 00:19:28.167 "enable_zerocopy_send_server": true, 00:19:28.167 "enable_zerocopy_send_client": false, 00:19:28.167 "zerocopy_threshold": 0, 00:19:28.167 "tls_version": 0, 00:19:28.167 "enable_ktls": false 00:19:28.167 } 00:19:28.167 }, 
00:19:28.167 { 00:19:28.167 "method": "sock_impl_set_options", 00:19:28.167 "params": { 00:19:28.167 "impl_name": "posix", 00:19:28.167 "recv_buf_size": 2097152, 00:19:28.167 "send_buf_size": 2097152, 00:19:28.167 "enable_recv_pipe": true, 00:19:28.167 "enable_quickack": false, 00:19:28.167 "enable_placement_id": 0, 00:19:28.167 "enable_zerocopy_send_server": true, 00:19:28.167 "enable_zerocopy_send_client": false, 00:19:28.167 "zerocopy_threshold": 0, 00:19:28.167 "tls_version": 0, 00:19:28.167 "enable_ktls": false 00:19:28.167 } 00:19:28.167 }, 00:19:28.167 { 00:19:28.167 "method": "sock_impl_set_options", 00:19:28.167 "params": { 00:19:28.167 "impl_name": "uring", 00:19:28.167 "recv_buf_size": 2097152, 00:19:28.167 "send_buf_size": 2097152, 00:19:28.167 "enable_recv_pipe": true, 00:19:28.167 "enable_quickack": false, 00:19:28.167 "enable_placement_id": 0, 00:19:28.167 "enable_zerocopy_send_server": false, 00:19:28.167 "enable_zerocopy_send_client": false, 00:19:28.167 "zerocopy_threshold": 0, 00:19:28.167 "tls_version": 0, 00:19:28.167 "enable_ktls": false 00:19:28.167 } 00:19:28.167 } 00:19:28.167 ] 00:19:28.167 }, 00:19:28.167 { 00:19:28.167 "subsystem": "vmd", 00:19:28.167 "config": [] 00:19:28.167 }, 00:19:28.167 { 00:19:28.167 "subsystem": "accel", 00:19:28.167 "config": [ 00:19:28.167 { 00:19:28.167 "method": "accel_set_options", 00:19:28.167 "params": { 00:19:28.167 "small_cache_size": 128, 00:19:28.167 "large_cache_size": 16, 00:19:28.167 "task_count": 2048, 00:19:28.167 "sequence_count": 2048, 00:19:28.167 "buf_count": 2048 00:19:28.167 } 00:19:28.167 } 00:19:28.167 ] 00:19:28.167 }, 00:19:28.167 { 00:19:28.167 "subsystem": "bdev", 00:19:28.167 "config": [ 00:19:28.167 { 00:19:28.167 "method": "bdev_set_options", 00:19:28.167 "params": { 00:19:28.167 "bdev_io_pool_size": 65535, 00:19:28.167 "bdev_io_cache_size": 256, 00:19:28.167 "bdev_auto_examine": true, 00:19:28.167 "iobuf_small_cache_size": 128, 00:19:28.167 "iobuf_large_cache_size": 16 00:19:28.167 } 00:19:28.167 }, 00:19:28.167 { 00:19:28.167 "method": "bdev_raid_set_options", 00:19:28.167 "params": { 00:19:28.167 "process_window_size_kb": 1024, 00:19:28.167 "process_max_bandwidth_mb_sec": 0 00:19:28.167 } 00:19:28.167 }, 00:19:28.167 { 00:19:28.167 "method": "bdev_iscsi_set_options", 00:19:28.167 "params": { 00:19:28.167 "timeout_sec": 30 00:19:28.167 } 00:19:28.167 }, 00:19:28.167 { 00:19:28.167 "method": "bdev_nvme_set_options", 00:19:28.167 "params": { 00:19:28.167 "action_on_timeout": "none", 00:19:28.167 "timeout_us": 0, 00:19:28.168 "timeout_admin_us": 0, 00:19:28.168 "keep_alive_timeout_ms": 10000, 00:19:28.168 "arbitration_burst": 0, 00:19:28.168 "low_priority_weight": 0, 00:19:28.168 "medium_priority_weight": 0, 00:19:28.168 "high_priority_weight": 0, 00:19:28.168 "nvme_adminq_poll_period_us": 10000, 00:19:28.168 "nvme_ioq_poll_period_us": 0, 00:19:28.168 "io_queue_requests": 512, 00:19:28.168 "delay_cmd_submit": true, 00:19:28.168 "transport_retry_count": 4, 00:19:28.168 "bdev_retry_count": 3, 00:19:28.168 "transport_ack_timeout": 0, 00:19:28.168 "ctrlr_loss_timeout_sec": 0, 00:19:28.168 "reconnect_delay_sec": 0, 00:19:28.168 "fast_io_fail_timeout_sec": 0, 00:19:28.168 "disable_auto_failback": false, 00:19:28.168 "generate_uuids": false, 00:19:28.168 "transport_tos": 0, 00:19:28.168 "nvme_error_stat": false, 00:19:28.168 "rdma_srq_size": 0, 00:19:28.168 "io_path_stat": false, 00:19:28.168 "allow_accel_sequence": false, 00:19:28.168 "rdma_max_cq_size": 0, 00:19:28.168 "rdma_cm_event_timeout_ms": 0, 00:19:28.168 
"dhchap_digests": [ 00:19:28.168 "sha256", 00:19:28.168 "sha384", 00:19:28.168 "sha512" 00:19:28.168 ], 00:19:28.168 "dhchap_dhgroups": [ 00:19:28.168 "null", 00:19:28.168 "ffdhe2048", 00:19:28.168 "ffdhe3072", 00:19:28.168 "ffdhe4096", 00:19:28.168 "ffdhe6144", 00:19:28.168 "ffdhe8192" 00:19:28.168 ] 00:19:28.168 } 00:19:28.168 }, 00:19:28.168 { 00:19:28.168 "method": "bdev_nvme_attach_controller", 00:19:28.168 "params": { 00:19:28.168 "name": "TLSTEST", 00:19:28.168 "trtype": "TCP", 00:19:28.168 "adrfam": "IPv4", 00:19:28.168 "traddr": "10.0.0.3", 00:19:28.168 "trsvcid": "4420", 00:19:28.168 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:28.168 "prchk_reftag": false, 00:19:28.168 "prchk_guard": false, 00:19:28.168 "ctrlr_loss_timeout_sec": 0, 00:19:28.168 "reconnect_delay_sec": 0, 00:19:28.168 "fast_io_fail_timeout_sec": 0, 00:19:28.168 "psk": "key0", 00:19:28.168 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:28.168 "hdgst": false, 00:19:28.168 "ddgst": false, 00:19:28.168 "multipath": "multipath" 00:19:28.168 } 00:19:28.168 }, 00:19:28.168 { 00:19:28.168 "method": "bdev_nvme_set_hotplug", 00:19:28.168 "params": { 00:19:28.168 "period_us": 100000, 00:19:28.168 "enable": false 00:19:28.168 } 00:19:28.168 }, 00:19:28.168 { 00:19:28.168 "method": "bdev_wait_for_examine" 00:19:28.168 } 00:19:28.168 ] 00:19:28.168 }, 00:19:28.168 { 00:19:28.168 "subsystem": "nbd", 00:19:28.168 "config": [] 00:19:28.168 } 00:19:28.168 ] 00:19:28.168 }' 00:19:28.168 14:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 75497 00:19:28.168 14:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 75497 ']' 00:19:28.168 14:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 75497 00:19:28.168 14:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:28.168 14:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:28.168 14:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75497 00:19:28.168 killing process with pid 75497 00:19:28.168 Received shutdown signal, test time was about 10.000000 seconds 00:19:28.168 00:19:28.168 Latency(us) 00:19:28.168 [2024-11-06T14:23:55.803Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:28.168 [2024-11-06T14:23:55.803Z] =================================================================================================================== 00:19:28.168 [2024-11-06T14:23:55.803Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:28.168 14:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:28.168 14:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:28.168 14:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75497' 00:19:28.168 14:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 75497 00:19:28.168 14:23:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 75497 00:19:29.546 14:23:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 75441 00:19:29.546 14:23:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 75441 ']' 00:19:29.546 14:23:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # 
kill -0 75441 00:19:29.546 14:23:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:29.546 14:23:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:29.546 14:23:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75441 00:19:29.546 killing process with pid 75441 00:19:29.546 14:23:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:29.546 14:23:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:29.546 14:23:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75441' 00:19:29.546 14:23:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 75441 00:19:29.546 14:23:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 75441 00:19:30.960 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:30.960 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:30.960 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:30.960 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:19:30.960 "subsystems": [ 00:19:30.960 { 00:19:30.960 "subsystem": "keyring", 00:19:30.960 "config": [ 00:19:30.960 { 00:19:30.960 "method": "keyring_file_add_key", 00:19:30.960 "params": { 00:19:30.960 "name": "key0", 00:19:30.960 "path": "/tmp/tmp.jaVnDvSk3q" 00:19:30.960 } 00:19:30.960 } 00:19:30.960 ] 00:19:30.960 }, 00:19:30.960 { 00:19:30.960 "subsystem": "iobuf", 00:19:30.960 "config": [ 00:19:30.960 { 00:19:30.960 "method": "iobuf_set_options", 00:19:30.960 "params": { 00:19:30.960 "small_pool_count": 8192, 00:19:30.960 "large_pool_count": 1024, 00:19:30.960 "small_bufsize": 8192, 00:19:30.960 "large_bufsize": 135168, 00:19:30.960 "enable_numa": false 00:19:30.960 } 00:19:30.960 } 00:19:30.960 ] 00:19:30.960 }, 00:19:30.960 { 00:19:30.960 "subsystem": "sock", 00:19:30.960 "config": [ 00:19:30.960 { 00:19:30.960 "method": "sock_set_default_impl", 00:19:30.960 "params": { 00:19:30.960 "impl_name": "uring" 00:19:30.960 } 00:19:30.960 }, 00:19:30.960 { 00:19:30.960 "method": "sock_impl_set_options", 00:19:30.960 "params": { 00:19:30.960 "impl_name": "ssl", 00:19:30.960 "recv_buf_size": 4096, 00:19:30.960 "send_buf_size": 4096, 00:19:30.960 "enable_recv_pipe": true, 00:19:30.960 "enable_quickack": false, 00:19:30.960 "enable_placement_id": 0, 00:19:30.960 "enable_zerocopy_send_server": true, 00:19:30.960 "enable_zerocopy_send_client": false, 00:19:30.960 "zerocopy_threshold": 0, 00:19:30.960 "tls_version": 0, 00:19:30.960 "enable_ktls": false 00:19:30.960 } 00:19:30.960 }, 00:19:30.960 { 00:19:30.960 "method": "sock_impl_set_options", 00:19:30.960 "params": { 00:19:30.960 "impl_name": "posix", 00:19:30.960 "recv_buf_size": 2097152, 00:19:30.960 "send_buf_size": 2097152, 00:19:30.960 "enable_recv_pipe": true, 00:19:30.960 "enable_quickack": false, 00:19:30.960 "enable_placement_id": 0, 00:19:30.960 "enable_zerocopy_send_server": true, 00:19:30.960 "enable_zerocopy_send_client": false, 00:19:30.960 "zerocopy_threshold": 0, 00:19:30.960 "tls_version": 0, 00:19:30.960 "enable_ktls": false 00:19:30.960 } 00:19:30.960 }, 00:19:30.960 { 00:19:30.960 "method": "sock_impl_set_options", 
00:19:30.960 "params": { 00:19:30.960 "impl_name": "uring", 00:19:30.960 "recv_buf_size": 2097152, 00:19:30.960 "send_buf_size": 2097152, 00:19:30.960 "enable_recv_pipe": true, 00:19:30.960 "enable_quickack": false, 00:19:30.960 "enable_placement_id": 0, 00:19:30.960 "enable_zerocopy_send_server": false, 00:19:30.960 "enable_zerocopy_send_client": false, 00:19:30.960 "zerocopy_threshold": 0, 00:19:30.960 "tls_version": 0, 00:19:30.960 "enable_ktls": false 00:19:30.960 } 00:19:30.960 } 00:19:30.960 ] 00:19:30.960 }, 00:19:30.960 { 00:19:30.960 "subsystem": "vmd", 00:19:30.960 "config": [] 00:19:30.960 }, 00:19:30.960 { 00:19:30.960 "subsystem": "accel", 00:19:30.960 "config": [ 00:19:30.960 { 00:19:30.960 "method": "accel_set_options", 00:19:30.960 "params": { 00:19:30.960 "small_cache_size": 128, 00:19:30.960 "large_cache_size": 16, 00:19:30.960 "task_count": 2048, 00:19:30.960 "sequence_count": 2048, 00:19:30.960 "buf_count": 2048 00:19:30.960 } 00:19:30.960 } 00:19:30.960 ] 00:19:30.960 }, 00:19:30.960 { 00:19:30.960 "subsystem": "bdev", 00:19:30.960 "config": [ 00:19:30.960 { 00:19:30.961 "method": "bdev_set_options", 00:19:30.961 "params": { 00:19:30.961 "bdev_io_pool_size": 65535, 00:19:30.961 "bdev_io_cache_size": 256, 00:19:30.961 "bdev_auto_examine": true, 00:19:30.961 "iobuf_small_cache_size": 128, 00:19:30.961 "iobuf_large_cache_size": 16 00:19:30.961 } 00:19:30.961 }, 00:19:30.961 { 00:19:30.961 "method": "bdev_raid_set_options", 00:19:30.961 "params": { 00:19:30.961 "process_window_size_kb": 1024, 00:19:30.961 "process_max_bandwidth_mb_sec": 0 00:19:30.961 } 00:19:30.961 }, 00:19:30.961 { 00:19:30.961 "method": "bdev_iscsi_set_options", 00:19:30.961 "params": { 00:19:30.961 "timeout_sec": 30 00:19:30.961 } 00:19:30.961 }, 00:19:30.961 { 00:19:30.961 "method": "bdev_nvme_set_options", 00:19:30.961 "params": { 00:19:30.961 "action_on_timeout": "none", 00:19:30.961 "timeout_us": 0, 00:19:30.961 "timeout_admin_us": 0, 00:19:30.961 "keep_alive_timeout_ms": 10000, 00:19:30.961 "arbitration_burst": 0, 00:19:30.961 "low_priority_weight": 0, 00:19:30.961 "medium_priority_weight": 0, 00:19:30.961 "high_priority_weight": 0, 00:19:30.961 "nvme_adminq_poll_period_us": 10000, 00:19:30.961 "nvme_ioq_poll_period_us": 0, 00:19:30.961 "io_queue_requests": 0, 00:19:30.961 "delay_cmd_submit": true, 00:19:30.961 "transport_retry_count": 4, 00:19:30.961 "bdev_retry_count": 3, 00:19:30.961 "transport_ack_timeout": 0, 00:19:30.961 "ctrlr_loss_timeout_sec": 0, 00:19:30.961 "reconnect_delay_sec": 0, 00:19:30.961 "fast_io_fail_timeout_sec": 0, 00:19:30.961 "disable_auto_failback": false, 00:19:30.961 "generate_uuids": false, 00:19:30.961 "transport_tos": 0, 00:19:30.961 "nvme_error_stat": false, 00:19:30.961 "rdma_srq_size": 0, 00:19:30.961 "io_path_stat": false, 00:19:30.961 "allow_accel_sequence": false, 00:19:30.961 "rdma_max_cq_size": 0, 00:19:30.961 "rdma_cm_event_timeout_ms": 0, 00:19:30.961 "dhchap_digests": [ 00:19:30.961 "sha256", 00:19:30.961 "sha384", 00:19:30.961 "sha512" 00:19:30.961 ], 00:19:30.961 "dhchap_dhgroups": [ 00:19:30.961 "null", 00:19:30.961 "ffdhe2048", 00:19:30.961 "ffdhe3072", 00:19:30.961 "ffdhe4096", 00:19:30.961 "ffdhe6144", 00:19:30.961 "ffdhe8192" 00:19:30.961 ] 00:19:30.961 } 00:19:30.961 }, 00:19:30.961 { 00:19:30.961 "method": "bdev_nvme_set_hotplug", 00:19:30.961 "params": { 00:19:30.961 "period_us": 100000, 00:19:30.961 "enable": false 00:19:30.961 } 00:19:30.961 }, 00:19:30.961 { 00:19:30.961 "method": "bdev_malloc_create", 00:19:30.961 "params": { 00:19:30.961 
"name": "malloc0", 00:19:30.961 "num_blocks": 8192, 00:19:30.961 "block_size": 4096, 00:19:30.961 "physical_block_size": 4096, 00:19:30.961 "uuid": "c7f185c5-57a6-474d-a9e9-3530554d68b2", 00:19:30.961 "optimal_io_boundary": 0, 00:19:30.961 "md_size": 0, 00:19:30.961 "dif_type": 0, 00:19:30.961 "dif_is_head_of_md": false, 00:19:30.961 "dif_pi_format": 0 00:19:30.961 } 00:19:30.961 }, 00:19:30.961 { 00:19:30.961 "method": "bdev_wait_for_examine" 00:19:30.961 } 00:19:30.961 ] 00:19:30.961 }, 00:19:30.961 { 00:19:30.961 "subsystem": "nbd", 00:19:30.961 "config": [] 00:19:30.961 }, 00:19:30.961 { 00:19:30.961 "subsystem": "scheduler", 00:19:30.961 "config": [ 00:19:30.961 { 00:19:30.961 "method": "framework_set_scheduler", 00:19:30.961 "params": { 00:19:30.961 "name": "static" 00:19:30.961 } 00:19:30.961 } 00:19:30.961 ] 00:19:30.961 }, 00:19:30.961 { 00:19:30.961 "subsystem": "nvmf", 00:19:30.961 "config": [ 00:19:30.961 { 00:19:30.961 "method": "nvmf_set_config", 00:19:30.961 "params": { 00:19:30.961 "discovery_filter": "match_any", 00:19:30.961 "admin_cmd_passthru": { 00:19:30.961 "identify_ctrlr": false 00:19:30.961 }, 00:19:30.961 "dhchap_digests": [ 00:19:30.961 "sha256", 00:19:30.961 "sha384", 00:19:30.961 "sha512" 00:19:30.961 ], 00:19:30.961 "dhchap_dhgroups": [ 00:19:30.961 "null", 00:19:30.961 "ffdhe2048", 00:19:30.961 "ffdhe3072", 00:19:30.961 "ffdhe4096", 00:19:30.961 "ffdhe6144", 00:19:30.961 "ffdhe8192" 00:19:30.961 ] 00:19:30.961 } 00:19:30.961 }, 00:19:30.961 { 00:19:30.961 "method": "nvmf_set_max_subsystems", 00:19:30.961 "params": { 00:19:30.961 "max_subsystems": 1024 00:19:30.961 } 00:19:30.961 }, 00:19:30.961 { 00:19:30.961 "method": "nvmf_set_crdt", 00:19:30.961 "params": { 00:19:30.961 "crdt1": 0, 00:19:30.961 "crdt2": 0, 00:19:30.961 "crdt3": 0 00:19:30.961 } 00:19:30.961 }, 00:19:30.961 { 00:19:30.961 "method": "nvmf_create_transport", 00:19:30.961 "params": { 00:19:30.961 "trtype": "TCP", 00:19:30.961 "max_queue_depth": 128, 00:19:30.961 "max_io_qpairs_per_ctrlr": 127, 00:19:30.961 "in_capsule_data_size": 4096, 00:19:30.961 "max_io_size": 131072, 00:19:30.961 "io_unit_size": 131072, 00:19:30.961 "max_aq_depth": 128, 00:19:30.961 "num_shared_buffers": 511, 00:19:30.961 "buf_cache_size": 4294967295, 00:19:30.961 "dif_insert_or_strip": false, 00:19:30.961 "zcopy": false, 00:19:30.961 "c2h_success": false, 00:19:30.961 "sock_priority": 0, 00:19:30.961 "abort_timeout_sec": 1, 00:19:30.961 "ack_timeout": 0, 00:19:30.961 "data_wr_pool_size": 0 00:19:30.961 } 00:19:30.961 }, 00:19:30.961 { 00:19:30.961 "method": "nvmf_create_subsystem", 00:19:30.961 "params": { 00:19:30.961 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:30.961 "allow_any_host": false, 00:19:30.961 "serial_number": "SPDK00000000000001", 00:19:30.961 "model_number": "SPDK bdev Controller", 00:19:30.961 "max_namespaces": 10, 00:19:30.961 "min_cntlid": 1, 00:19:30.961 "max_cntlid": 65519, 00:19:30.961 "ana_reporting": false 00:19:30.961 } 00:19:30.961 }, 00:19:30.961 { 00:19:30.961 "method": "nvmf_subsystem_add_host", 00:19:30.961 "params": { 00:19:30.961 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:30.961 "host": "nqn.2016-06.io.spdk:host1", 00:19:30.961 "psk": "key0" 00:19:30.961 } 00:19:30.961 }, 00:19:30.961 { 00:19:30.961 "method": "nvmf_subsystem_add_ns", 00:19:30.961 "params": { 00:19:30.961 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:30.961 "namespace": { 00:19:30.961 "nsid": 1, 00:19:30.961 "bdev_name": "malloc0", 00:19:30.961 "nguid": "C7F185C557A6474DA9E93530554D68B2", 00:19:30.961 "uuid": 
"c7f185c5-57a6-474d-a9e9-3530554d68b2", 00:19:30.961 "no_auto_visible": false 00:19:30.961 } 00:19:30.961 } 00:19:30.961 }, 00:19:30.961 { 00:19:30.961 "method": "nvmf_subsystem_add_listener", 00:19:30.961 "params": { 00:19:30.961 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:30.961 "listen_address": { 00:19:30.961 "trtype": "TCP", 00:19:30.961 "adrfam": "IPv4", 00:19:30.961 "traddr": "10.0.0.3", 00:19:30.961 "trsvcid": "4420" 00:19:30.961 }, 00:19:30.961 "secure_channel": true 00:19:30.961 } 00:19:30.961 } 00:19:30.961 ] 00:19:30.961 } 00:19:30.961 ] 00:19:30.961 }' 00:19:30.961 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:30.961 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=75565 00:19:30.961 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:30.961 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 75565 00:19:30.961 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 75565 ']' 00:19:30.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:30.961 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:30.961 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:30.962 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:30.962 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:30.962 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:30.962 [2024-11-06 14:23:58.327196] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:19:30.962 [2024-11-06 14:23:58.327319] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:30.962 [2024-11-06 14:23:58.515923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.220 [2024-11-06 14:23:58.668569] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:31.221 [2024-11-06 14:23:58.668628] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:31.221 [2024-11-06 14:23:58.668645] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:31.221 [2024-11-06 14:23:58.668667] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:31.221 [2024-11-06 14:23:58.668679] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:31.221 [2024-11-06 14:23:58.670172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:31.479 [2024-11-06 14:23:59.037563] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:31.738 [2024-11-06 14:23:59.264453] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:31.738 [2024-11-06 14:23:59.296330] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:31.738 [2024-11-06 14:23:59.296596] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:31.738 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:31.738 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:31.738 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:31.738 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:31.738 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:31.996 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:31.996 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=75597 00:19:31.996 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 75597 /var/tmp/bdevperf.sock 00:19:31.996 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:31.996 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 75597 ']' 00:19:31.996 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:19:31.996 "subsystems": [ 00:19:31.996 { 00:19:31.996 "subsystem": "keyring", 00:19:31.996 "config": [ 00:19:31.996 { 00:19:31.996 "method": "keyring_file_add_key", 00:19:31.996 "params": { 00:19:31.996 "name": "key0", 00:19:31.996 "path": "/tmp/tmp.jaVnDvSk3q" 00:19:31.996 } 00:19:31.996 } 00:19:31.996 ] 00:19:31.996 }, 00:19:31.996 { 00:19:31.996 "subsystem": "iobuf", 00:19:31.996 "config": [ 00:19:31.996 { 00:19:31.996 "method": "iobuf_set_options", 00:19:31.996 "params": { 00:19:31.996 "small_pool_count": 8192, 00:19:31.997 "large_pool_count": 1024, 00:19:31.997 "small_bufsize": 8192, 00:19:31.997 "large_bufsize": 135168, 00:19:31.997 "enable_numa": false 00:19:31.997 } 00:19:31.997 } 00:19:31.997 ] 00:19:31.997 }, 00:19:31.997 { 00:19:31.997 "subsystem": "sock", 00:19:31.997 "config": [ 00:19:31.997 { 00:19:31.997 "method": "sock_set_default_impl", 00:19:31.997 "params": { 00:19:31.997 "impl_name": "uring" 00:19:31.997 } 00:19:31.997 }, 00:19:31.997 { 00:19:31.997 "method": "sock_impl_set_options", 00:19:31.997 "params": { 00:19:31.997 "impl_name": "ssl", 00:19:31.997 "recv_buf_size": 4096, 00:19:31.997 "send_buf_size": 4096, 00:19:31.997 "enable_recv_pipe": true, 00:19:31.997 "enable_quickack": false, 00:19:31.997 "enable_placement_id": 0, 00:19:31.997 "enable_zerocopy_send_server": true, 00:19:31.997 "enable_zerocopy_send_client": false, 00:19:31.997 "zerocopy_threshold": 0, 00:19:31.997 "tls_version": 0, 00:19:31.997 "enable_ktls": false 00:19:31.997 } 00:19:31.997 }, 00:19:31.997 { 00:19:31.997 "method": "sock_impl_set_options", 00:19:31.997 "params": { 
00:19:31.997 "impl_name": "posix", 00:19:31.997 "recv_buf_size": 2097152, 00:19:31.997 "send_buf_size": 2097152, 00:19:31.997 "enable_recv_pipe": true, 00:19:31.997 "enable_quickack": false, 00:19:31.997 "enable_placement_id": 0, 00:19:31.997 "enable_zerocopy_send_server": true, 00:19:31.997 "enable_zerocopy_send_client": false, 00:19:31.997 "zerocopy_threshold": 0, 00:19:31.997 "tls_version": 0, 00:19:31.997 "enable_ktls": false 00:19:31.997 } 00:19:31.997 }, 00:19:31.997 { 00:19:31.997 "method": "sock_impl_set_options", 00:19:31.997 "params": { 00:19:31.997 "impl_name": "uring", 00:19:31.997 "recv_buf_size": 2097152, 00:19:31.997 "send_buf_size": 2097152, 00:19:31.997 "enable_recv_pipe": true, 00:19:31.997 "enable_quickack": false, 00:19:31.997 "enable_placement_id": 0, 00:19:31.997 "enable_zerocopy_send_server": false, 00:19:31.997 "enable_zerocopy_send_client": false, 00:19:31.997 "zerocopy_threshold": 0, 00:19:31.997 "tls_version": 0, 00:19:31.997 "enable_ktls": false 00:19:31.997 } 00:19:31.997 } 00:19:31.997 ] 00:19:31.997 }, 00:19:31.997 { 00:19:31.997 "subsystem": "vmd", 00:19:31.997 "config": [] 00:19:31.997 }, 00:19:31.997 { 00:19:31.997 "subsystem": "accel", 00:19:31.997 "config": [ 00:19:31.997 { 00:19:31.997 "method": "accel_set_options", 00:19:31.997 "params": { 00:19:31.997 "small_cache_size": 128, 00:19:31.997 "large_cache_size": 16, 00:19:31.997 "task_count": 2048, 00:19:31.997 "sequence_count": 2048, 00:19:31.997 "buf_count": 2048 00:19:31.997 } 00:19:31.997 } 00:19:31.997 ] 00:19:31.997 }, 00:19:31.997 { 00:19:31.997 "subsystem": "bdev", 00:19:31.997 "config": [ 00:19:31.997 { 00:19:31.997 "method": "bdev_set_options", 00:19:31.997 "params": { 00:19:31.997 "bdev_io_pool_size": 65535, 00:19:31.997 "bdev_io_cache_size": 256, 00:19:31.997 "bdev_auto_examine": true, 00:19:31.997 "iobuf_small_cache_size": 128, 00:19:31.997 "iobuf_large_cache_size": 16 00:19:31.997 } 00:19:31.997 }, 00:19:31.997 { 00:19:31.997 "method": "bdev_raid_set_options", 00:19:31.997 "params": { 00:19:31.997 "process_window_size_kb": 1024, 00:19:31.997 "process_max_bandwidth_mb_sec": 0 00:19:31.997 } 00:19:31.997 }, 00:19:31.997 { 00:19:31.997 "method": "bdev_iscsi_set_options", 00:19:31.997 "params": { 00:19:31.997 "timeout_sec": 30 00:19:31.997 } 00:19:31.997 }, 00:19:31.997 { 00:19:31.997 "method": "bdev_nvme_set_options", 00:19:31.997 "params": { 00:19:31.997 "action_on_timeout": "none", 00:19:31.997 "timeout_us": 0, 00:19:31.997 "timeout_admin_us": 0, 00:19:31.997 "keep_alive_timeout_ms": 10000, 00:19:31.997 "arbitration_burst": 0, 00:19:31.997 "low_priority_weight": 0, 00:19:31.997 "medium_priority_weight": 0, 00:19:31.997 "high_priority_weight": 0, 00:19:31.997 "nvme_adminq_poll_period_us": 10000, 00:19:31.997 "nvme_ioq_poll_period_us": 0, 00:19:31.997 "io_queue_requests": 512, 00:19:31.997 "delay_cmd_submit": true, 00:19:31.997 "transport_retry_count": 4, 00:19:31.997 "bdev_retry_count": 3, 00:19:31.997 "transport_ack_timeout": 0, 00:19:31.997 "ctrlr_loss_timeout_sec": 0, 00:19:31.997 "reconnect_delay_sec": 0, 00:19:31.997 "fast_io_fail_timeout_sec": 0, 00:19:31.997 "disable_auto_failback": false, 00:19:31.997 "generate_uuids": false, 00:19:31.997 "transport_tos": 0, 00:19:31.997 "nvme_error_stat": false, 00:19:31.997 "rdma_srq_size": 0, 00:19:31.997 "io_path_stat": false, 00:19:31.997 "allow_accel_sequence": false, 00:19:31.997 "rdma_max_cq_size": 0, 00:19:31.997 "rdma_cm_event_timeout_ms": 0, 00:19:31.997 "dhchap_digests": [ 00:19:31.997 "sha256", 00:19:31.997 "sha384", 00:19:31.997 "sha512" 
00:19:31.997 ], 00:19:31.997 "dhchap_dhgroups": [ 00:19:31.997 "null", 00:19:31.997 "ffdhe2048", 00:19:31.997 "ffdhe3072", 00:19:31.997 "ffdhe4096", 00:19:31.997 "ffdhe6144", 00:19:31.997 "ffdhe8192" 00:19:31.997 ] 00:19:31.997 } 00:19:31.997 }, 00:19:31.997 { 00:19:31.997 "method": "bdev_nvme_attach_controller", 00:19:31.997 "params": { 00:19:31.997 "name": "TLSTEST", 00:19:31.997 "trtype": "TCP", 00:19:31.997 "adrfam": "IPv4", 00:19:31.997 "traddr": "10.0.0.3", 00:19:31.997 "trsvcid": "4420", 00:19:31.997 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:31.997 "prchk_reftag": false, 00:19:31.997 "prchk_guard": false, 00:19:31.997 "ctrlr_loss_timeout_sec": 0, 00:19:31.997 "reconnect_delay_sec": 0, 00:19:31.997 "fast_io_fail_timeout_sec": 0, 00:19:31.997 "psk": "key0", 00:19:31.997 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:31.997 "hdgst": false, 00:19:31.997 "ddgst": false, 00:19:31.997 "multipath": "multipath" 00:19:31.997 } 00:19:31.997 }, 00:19:31.997 { 00:19:31.997 "method": "bdev_nvme_set_hotplug", 00:19:31.997 "params": { 00:19:31.997 "period_us": 100000, 00:19:31.997 "enable": false 00:19:31.997 } 00:19:31.997 }, 00:19:31.997 { 00:19:31.997 "method": "bdev_wait_for_examine" 00:19:31.997 } 00:19:31.997 ] 00:19:31.997 }, 00:19:31.997 { 00:19:31.997 "subsystem": "nbd", 00:19:31.997 "config": [] 00:19:31.997 } 00:19:31.997 ] 00:19:31.997 }' 00:19:31.997 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:31.997 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:31.997 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:31.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:31.997 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:31.997 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:31.997 [2024-11-06 14:23:59.508227] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:19:31.997 [2024-11-06 14:23:59.508396] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75597 ] 00:19:32.255 [2024-11-06 14:23:59.684760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.255 [2024-11-06 14:23:59.805252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:32.513 [2024-11-06 14:24:00.103552] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:32.771 [2024-11-06 14:24:00.231831] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:32.771 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:32.771 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:32.771 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:33.030 Running I/O for 10 seconds... 
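For a quick sanity check on the throughput figures reported below, bdevperf's MiB/s value follows directly from the reported IOPS and the 4096-byte I/O size; a minimal sketch using the numbers from the TLSTESTn1 result block (values copied from the JSON results that follow, not new measurements):

  # 3924.24 IOPS * 4096 B per I/O / 2^20 B per MiB ~= 15.33 MiB/s, matching the reported "mibps" field
  awk 'BEGIN { iops = 3924.2388768719507; io_size = 4096; printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024) }'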
00:19:34.915 3978.00 IOPS, 15.54 MiB/s [2024-11-06T14:24:03.487Z] 3980.50 IOPS, 15.55 MiB/s [2024-11-06T14:24:04.863Z] 3893.33 IOPS, 15.21 MiB/s [2024-11-06T14:24:05.800Z] 3902.25 IOPS, 15.24 MiB/s [2024-11-06T14:24:06.738Z] 3875.00 IOPS, 15.14 MiB/s [2024-11-06T14:24:07.674Z] 3880.67 IOPS, 15.16 MiB/s [2024-11-06T14:24:08.610Z] 3909.57 IOPS, 15.27 MiB/s [2024-11-06T14:24:09.550Z] 3910.12 IOPS, 15.27 MiB/s [2024-11-06T14:24:10.486Z] 3913.44 IOPS, 15.29 MiB/s [2024-11-06T14:24:10.486Z] 3918.00 IOPS, 15.30 MiB/s 00:19:42.851 Latency(us) 00:19:42.851 [2024-11-06T14:24:10.486Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:42.851 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:42.851 Verification LBA range: start 0x0 length 0x2000 00:19:42.851 TLSTESTn1 : 10.02 3924.24 15.33 0.00 0.00 32568.40 5369.21 40005.91 00:19:42.851 [2024-11-06T14:24:10.486Z] =================================================================================================================== 00:19:42.851 [2024-11-06T14:24:10.486Z] Total : 3924.24 15.33 0.00 0.00 32568.40 5369.21 40005.91 00:19:42.851 { 00:19:42.851 "results": [ 00:19:42.851 { 00:19:42.851 "job": "TLSTESTn1", 00:19:42.851 "core_mask": "0x4", 00:19:42.851 "workload": "verify", 00:19:42.851 "status": "finished", 00:19:42.851 "verify_range": { 00:19:42.851 "start": 0, 00:19:42.851 "length": 8192 00:19:42.851 }, 00:19:42.851 "queue_depth": 128, 00:19:42.851 "io_size": 4096, 00:19:42.851 "runtime": 10.015955, 00:19:42.851 "iops": 3924.2388768719507, 00:19:42.851 "mibps": 15.329058112781057, 00:19:42.851 "io_failed": 0, 00:19:42.851 "io_timeout": 0, 00:19:42.851 "avg_latency_us": 32568.399002875773, 00:19:42.851 "min_latency_us": 5369.2144578313255, 00:19:42.851 "max_latency_us": 40005.91164658635 00:19:42.851 } 00:19:42.851 ], 00:19:42.851 "core_count": 1 00:19:42.851 } 00:19:43.110 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:43.110 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 75597 00:19:43.110 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 75597 ']' 00:19:43.110 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 75597 00:19:43.110 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:43.110 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:43.110 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75597 00:19:43.110 killing process with pid 75597 00:19:43.110 Received shutdown signal, test time was about 10.000000 seconds 00:19:43.110 00:19:43.110 Latency(us) 00:19:43.110 [2024-11-06T14:24:10.745Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:43.110 [2024-11-06T14:24:10.745Z] =================================================================================================================== 00:19:43.110 [2024-11-06T14:24:10.745Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:43.110 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:43.110 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:43.110 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 
'killing process with pid 75597' 00:19:43.110 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 75597 00:19:43.110 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 75597 00:19:44.488 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 75565 00:19:44.488 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 75565 ']' 00:19:44.488 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 75565 00:19:44.488 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:44.488 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:44.488 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75565 00:19:44.488 killing process with pid 75565 00:19:44.488 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:44.488 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:44.488 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75565' 00:19:44.488 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 75565 00:19:44.488 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 75565 00:19:45.868 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:19:45.868 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:45.868 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:45.868 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:45.868 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=75760 00:19:45.868 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:45.868 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 75760 00:19:45.868 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 75760 ']' 00:19:45.868 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:45.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:45.868 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:45.868 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:45.868 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:45.868 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:45.868 [2024-11-06 14:24:13.288686] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:19:45.868 [2024-11-06 14:24:13.288816] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:45.868 [2024-11-06 14:24:13.476809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.127 [2024-11-06 14:24:13.631295] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:46.127 [2024-11-06 14:24:13.631614] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:46.127 [2024-11-06 14:24:13.631650] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:46.127 [2024-11-06 14:24:13.631676] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:46.127 [2024-11-06 14:24:13.631693] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:46.127 [2024-11-06 14:24:13.633195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:46.386 [2024-11-06 14:24:13.883887] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:46.645 14:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:46.645 14:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:46.645 14:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:46.645 14:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:46.645 14:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:46.645 14:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:46.645 14:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.jaVnDvSk3q 00:19:46.645 14:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.jaVnDvSk3q 00:19:46.645 14:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:46.903 [2024-11-06 14:24:14.387062] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:46.904 14:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:47.163 14:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:19:47.422 [2024-11-06 14:24:14.814727] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:47.422 [2024-11-06 14:24:14.815116] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:47.422 14:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:47.682 malloc0 00:19:47.682 14:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
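Condensed, the target-side TLS setup that tls.sh drives here is just a handful of RPCs; a sketch of the sequence, assuming the same addresses, NQNs, and temporary key path used by this run (the keyring and host-PSK registration steps appear immediately after this point in the log):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k enables TLS on the listener
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc keyring_file_add_key key0 /tmp/tmp.jaVnDvSk3q
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0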
00:19:47.941 14:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.jaVnDvSk3q 00:19:47.941 14:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:48.200 14:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=75810 00:19:48.200 14:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:48.200 14:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:48.200 14:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 75810 /var/tmp/bdevperf.sock 00:19:48.200 14:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 75810 ']' 00:19:48.200 14:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:48.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:48.200 14:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:48.200 14:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:48.200 14:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:48.200 14:24:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:48.460 [2024-11-06 14:24:15.845807] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:19:48.460 [2024-11-06 14:24:15.845944] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75810 ] 00:19:48.460 [2024-11-06 14:24:16.025473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.741 [2024-11-06 14:24:16.189680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:49.000 [2024-11-06 14:24:16.397729] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:49.258 14:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:49.258 14:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:49.258 14:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.jaVnDvSk3q 00:19:49.517 14:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:49.517 [2024-11-06 14:24:17.128194] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:49.775 nvme0n1 00:19:49.775 14:24:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:49.775 Running I/O for 1 seconds... 00:19:51.150 3328.00 IOPS, 13.00 MiB/s 00:19:51.150 Latency(us) 00:19:51.150 [2024-11-06T14:24:18.785Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:51.150 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:51.150 Verification LBA range: start 0x0 length 0x2000 00:19:51.150 nvme0n1 : 1.02 3381.70 13.21 0.00 0.00 37401.28 10685.79 25898.56 00:19:51.150 [2024-11-06T14:24:18.785Z] =================================================================================================================== 00:19:51.150 [2024-11-06T14:24:18.785Z] Total : 3381.70 13.21 0.00 0.00 37401.28 10685.79 25898.56 00:19:51.150 { 00:19:51.150 "results": [ 00:19:51.150 { 00:19:51.150 "job": "nvme0n1", 00:19:51.150 "core_mask": "0x2", 00:19:51.150 "workload": "verify", 00:19:51.150 "status": "finished", 00:19:51.150 "verify_range": { 00:19:51.150 "start": 0, 00:19:51.150 "length": 8192 00:19:51.150 }, 00:19:51.150 "queue_depth": 128, 00:19:51.150 "io_size": 4096, 00:19:51.150 "runtime": 1.021971, 00:19:51.150 "iops": 3381.7006549109515, 00:19:51.150 "mibps": 13.209768183245904, 00:19:51.150 "io_failed": 0, 00:19:51.150 "io_timeout": 0, 00:19:51.150 "avg_latency_us": 37401.28395061728, 00:19:51.150 "min_latency_us": 10685.789558232931, 00:19:51.150 "max_latency_us": 25898.563855421686 00:19:51.150 } 00:19:51.150 ], 00:19:51.150 "core_count": 1 00:19:51.150 } 00:19:51.150 14:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 75810 00:19:51.150 14:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 75810 ']' 00:19:51.150 14:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 75810 00:19:51.150 14:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@957 -- # uname 00:19:51.150 14:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:51.150 14:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75810 00:19:51.150 killing process with pid 75810 00:19:51.150 Received shutdown signal, test time was about 1.000000 seconds 00:19:51.150 00:19:51.151 Latency(us) 00:19:51.151 [2024-11-06T14:24:18.786Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:51.151 [2024-11-06T14:24:18.786Z] =================================================================================================================== 00:19:51.151 [2024-11-06T14:24:18.786Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:51.151 14:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:51.151 14:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:51.151 14:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75810' 00:19:51.151 14:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 75810 00:19:51.151 14:24:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 75810 00:19:52.087 14:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 75760 00:19:52.087 14:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 75760 ']' 00:19:52.087 14:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 75760 00:19:52.087 14:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:52.087 14:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:52.087 14:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75760 00:19:52.087 killing process with pid 75760 00:19:52.087 14:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:52.087 14:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:52.087 14:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75760' 00:19:52.087 14:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 75760 00:19:52.087 14:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 75760 00:19:53.461 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:19:53.461 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:53.461 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:53.461 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.461 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=75885 00:19:53.461 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:53.461 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 75885 00:19:53.461 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@833 -- # '[' -z 75885 ']' 00:19:53.461 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:53.461 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:53.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:53.461 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:53.461 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:53.461 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.461 [2024-11-06 14:24:21.039432] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:19:53.461 [2024-11-06 14:24:21.039561] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:53.720 [2024-11-06 14:24:21.226573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.979 [2024-11-06 14:24:21.379706] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:53.979 [2024-11-06 14:24:21.379773] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:53.979 [2024-11-06 14:24:21.379793] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:53.979 [2024-11-06 14:24:21.379818] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:53.979 [2024-11-06 14:24:21.379849] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
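The trace notices above spell out both capture paths; as a sketch, either of the following would grab the nvmf target's tracepoint data for offline analysis (app name and shm id taken from the notice; the destination path is illustrative):

  spdk_trace -s nvmf -i 0                      # live snapshot of events at runtime
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0   # or copy the shared-memory trace file for later analysis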
00:19:53.979 [2024-11-06 14:24:21.381118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:54.237 [2024-11-06 14:24:21.634037] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:54.237 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:54.237 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:54.237 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:54.237 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:54.237 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:54.496 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:54.496 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:19:54.496 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.496 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:54.496 [2024-11-06 14:24:21.935183] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:54.496 malloc0 00:19:54.496 [2024-11-06 14:24:22.000388] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:54.496 [2024-11-06 14:24:22.000713] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:54.496 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.496 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=75917 00:19:54.496 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:54.496 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 75917 /var/tmp/bdevperf.sock 00:19:54.496 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 75917 ']' 00:19:54.496 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:54.496 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:54.496 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:54.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:54.496 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:54.496 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:54.754 [2024-11-06 14:24:22.131912] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:19:54.754 [2024-11-06 14:24:22.132057] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75917 ] 00:19:54.754 [2024-11-06 14:24:22.300403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.012 [2024-11-06 14:24:22.428188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:55.012 [2024-11-06 14:24:22.637872] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:55.600 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:55.600 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:55.600 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.jaVnDvSk3q 00:19:55.600 14:24:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:55.874 [2024-11-06 14:24:23.362821] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:55.874 nvme0n1 00:19:55.874 14:24:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:56.132 Running I/O for 1 seconds... 00:19:57.068 3291.00 IOPS, 12.86 MiB/s 00:19:57.068 Latency(us) 00:19:57.068 [2024-11-06T14:24:24.703Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:57.068 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:57.068 Verification LBA range: start 0x0 length 0x2000 00:19:57.068 nvme0n1 : 1.03 3306.13 12.91 0.00 0.00 38099.84 11159.54 25898.56 00:19:57.068 [2024-11-06T14:24:24.703Z] =================================================================================================================== 00:19:57.068 [2024-11-06T14:24:24.703Z] Total : 3306.13 12.91 0.00 0.00 38099.84 11159.54 25898.56 00:19:57.068 { 00:19:57.068 "results": [ 00:19:57.068 { 00:19:57.068 "job": "nvme0n1", 00:19:57.068 "core_mask": "0x2", 00:19:57.068 "workload": "verify", 00:19:57.068 "status": "finished", 00:19:57.068 "verify_range": { 00:19:57.068 "start": 0, 00:19:57.068 "length": 8192 00:19:57.068 }, 00:19:57.068 "queue_depth": 128, 00:19:57.068 "io_size": 4096, 00:19:57.068 "runtime": 1.034441, 00:19:57.068 "iops": 3306.133457587238, 00:19:57.068 "mibps": 12.914583818700148, 00:19:57.068 "io_failed": 0, 00:19:57.068 "io_timeout": 0, 00:19:57.068 "avg_latency_us": 38099.84359238122, 00:19:57.068 "min_latency_us": 11159.543775100401, 00:19:57.068 "max_latency_us": 25898.563855421686 00:19:57.068 } 00:19:57.068 ], 00:19:57.068 "core_count": 1 00:19:57.068 } 00:19:57.068 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:19:57.068 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.068 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:57.328 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.328 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:19:57.328 "subsystems": [ 00:19:57.328 { 00:19:57.328 "subsystem": "keyring", 00:19:57.328 "config": [ 00:19:57.328 { 00:19:57.328 "method": "keyring_file_add_key", 00:19:57.328 "params": { 00:19:57.328 "name": "key0", 00:19:57.328 "path": "/tmp/tmp.jaVnDvSk3q" 00:19:57.328 } 00:19:57.328 } 00:19:57.328 ] 00:19:57.328 }, 00:19:57.328 { 00:19:57.328 "subsystem": "iobuf", 00:19:57.328 "config": [ 00:19:57.328 { 00:19:57.328 "method": "iobuf_set_options", 00:19:57.328 "params": { 00:19:57.328 "small_pool_count": 8192, 00:19:57.328 "large_pool_count": 1024, 00:19:57.328 "small_bufsize": 8192, 00:19:57.328 "large_bufsize": 135168, 00:19:57.328 "enable_numa": false 00:19:57.328 } 00:19:57.328 } 00:19:57.328 ] 00:19:57.328 }, 00:19:57.328 { 00:19:57.328 "subsystem": "sock", 00:19:57.328 "config": [ 00:19:57.328 { 00:19:57.328 "method": "sock_set_default_impl", 00:19:57.328 "params": { 00:19:57.328 "impl_name": "uring" 00:19:57.328 } 00:19:57.328 }, 00:19:57.328 { 00:19:57.328 "method": "sock_impl_set_options", 00:19:57.328 "params": { 00:19:57.328 "impl_name": "ssl", 00:19:57.328 "recv_buf_size": 4096, 00:19:57.328 "send_buf_size": 4096, 00:19:57.328 "enable_recv_pipe": true, 00:19:57.328 "enable_quickack": false, 00:19:57.328 "enable_placement_id": 0, 00:19:57.328 "enable_zerocopy_send_server": true, 00:19:57.328 "enable_zerocopy_send_client": false, 00:19:57.328 "zerocopy_threshold": 0, 00:19:57.328 "tls_version": 0, 00:19:57.328 "enable_ktls": false 00:19:57.328 } 00:19:57.328 }, 00:19:57.328 { 00:19:57.328 "method": "sock_impl_set_options", 00:19:57.328 "params": { 00:19:57.328 "impl_name": "posix", 00:19:57.328 "recv_buf_size": 2097152, 00:19:57.328 "send_buf_size": 2097152, 00:19:57.328 "enable_recv_pipe": true, 00:19:57.328 "enable_quickack": false, 00:19:57.328 "enable_placement_id": 0, 00:19:57.328 "enable_zerocopy_send_server": true, 00:19:57.328 "enable_zerocopy_send_client": false, 00:19:57.328 "zerocopy_threshold": 0, 00:19:57.328 "tls_version": 0, 00:19:57.328 "enable_ktls": false 00:19:57.328 } 00:19:57.328 }, 00:19:57.328 { 00:19:57.328 "method": "sock_impl_set_options", 00:19:57.329 "params": { 00:19:57.329 "impl_name": "uring", 00:19:57.329 "recv_buf_size": 2097152, 00:19:57.329 "send_buf_size": 2097152, 00:19:57.329 "enable_recv_pipe": true, 00:19:57.329 "enable_quickack": false, 00:19:57.329 "enable_placement_id": 0, 00:19:57.329 "enable_zerocopy_send_server": false, 00:19:57.329 "enable_zerocopy_send_client": false, 00:19:57.329 "zerocopy_threshold": 0, 00:19:57.329 "tls_version": 0, 00:19:57.329 "enable_ktls": false 00:19:57.329 } 00:19:57.329 } 00:19:57.329 ] 00:19:57.329 }, 00:19:57.329 { 00:19:57.329 "subsystem": "vmd", 00:19:57.329 "config": [] 00:19:57.329 }, 00:19:57.329 { 00:19:57.329 "subsystem": "accel", 00:19:57.329 "config": [ 00:19:57.329 { 00:19:57.329 "method": "accel_set_options", 00:19:57.329 "params": { 00:19:57.329 "small_cache_size": 128, 00:19:57.329 "large_cache_size": 16, 00:19:57.329 "task_count": 2048, 00:19:57.329 "sequence_count": 2048, 00:19:57.329 "buf_count": 2048 00:19:57.329 } 00:19:57.329 } 00:19:57.329 ] 00:19:57.329 }, 00:19:57.329 { 00:19:57.329 "subsystem": "bdev", 00:19:57.329 "config": [ 00:19:57.329 { 00:19:57.329 "method": "bdev_set_options", 00:19:57.329 "params": { 00:19:57.329 "bdev_io_pool_size": 65535, 00:19:57.329 "bdev_io_cache_size": 256, 00:19:57.329 "bdev_auto_examine": true, 
00:19:57.329 "iobuf_small_cache_size": 128, 00:19:57.329 "iobuf_large_cache_size": 16 00:19:57.329 } 00:19:57.329 }, 00:19:57.329 { 00:19:57.329 "method": "bdev_raid_set_options", 00:19:57.329 "params": { 00:19:57.329 "process_window_size_kb": 1024, 00:19:57.329 "process_max_bandwidth_mb_sec": 0 00:19:57.329 } 00:19:57.329 }, 00:19:57.329 { 00:19:57.329 "method": "bdev_iscsi_set_options", 00:19:57.329 "params": { 00:19:57.329 "timeout_sec": 30 00:19:57.329 } 00:19:57.329 }, 00:19:57.329 { 00:19:57.329 "method": "bdev_nvme_set_options", 00:19:57.329 "params": { 00:19:57.329 "action_on_timeout": "none", 00:19:57.329 "timeout_us": 0, 00:19:57.329 "timeout_admin_us": 0, 00:19:57.329 "keep_alive_timeout_ms": 10000, 00:19:57.329 "arbitration_burst": 0, 00:19:57.329 "low_priority_weight": 0, 00:19:57.329 "medium_priority_weight": 0, 00:19:57.329 "high_priority_weight": 0, 00:19:57.329 "nvme_adminq_poll_period_us": 10000, 00:19:57.329 "nvme_ioq_poll_period_us": 0, 00:19:57.329 "io_queue_requests": 0, 00:19:57.329 "delay_cmd_submit": true, 00:19:57.329 "transport_retry_count": 4, 00:19:57.329 "bdev_retry_count": 3, 00:19:57.329 "transport_ack_timeout": 0, 00:19:57.329 "ctrlr_loss_timeout_sec": 0, 00:19:57.329 "reconnect_delay_sec": 0, 00:19:57.329 "fast_io_fail_timeout_sec": 0, 00:19:57.329 "disable_auto_failback": false, 00:19:57.329 "generate_uuids": false, 00:19:57.329 "transport_tos": 0, 00:19:57.329 "nvme_error_stat": false, 00:19:57.329 "rdma_srq_size": 0, 00:19:57.329 "io_path_stat": false, 00:19:57.329 "allow_accel_sequence": false, 00:19:57.329 "rdma_max_cq_size": 0, 00:19:57.329 "rdma_cm_event_timeout_ms": 0, 00:19:57.329 "dhchap_digests": [ 00:19:57.329 "sha256", 00:19:57.329 "sha384", 00:19:57.329 "sha512" 00:19:57.329 ], 00:19:57.329 "dhchap_dhgroups": [ 00:19:57.329 "null", 00:19:57.329 "ffdhe2048", 00:19:57.329 "ffdhe3072", 00:19:57.329 "ffdhe4096", 00:19:57.329 "ffdhe6144", 00:19:57.329 "ffdhe8192" 00:19:57.329 ] 00:19:57.329 } 00:19:57.329 }, 00:19:57.329 { 00:19:57.329 "method": "bdev_nvme_set_hotplug", 00:19:57.329 "params": { 00:19:57.329 "period_us": 100000, 00:19:57.329 "enable": false 00:19:57.329 } 00:19:57.329 }, 00:19:57.329 { 00:19:57.329 "method": "bdev_malloc_create", 00:19:57.329 "params": { 00:19:57.329 "name": "malloc0", 00:19:57.329 "num_blocks": 8192, 00:19:57.329 "block_size": 4096, 00:19:57.329 "physical_block_size": 4096, 00:19:57.329 "uuid": "a06292e7-aa08-4145-8d59-19528fc21a69", 00:19:57.329 "optimal_io_boundary": 0, 00:19:57.329 "md_size": 0, 00:19:57.329 "dif_type": 0, 00:19:57.329 "dif_is_head_of_md": false, 00:19:57.329 "dif_pi_format": 0 00:19:57.329 } 00:19:57.329 }, 00:19:57.329 { 00:19:57.329 "method": "bdev_wait_for_examine" 00:19:57.329 } 00:19:57.329 ] 00:19:57.329 }, 00:19:57.329 { 00:19:57.329 "subsystem": "nbd", 00:19:57.329 "config": [] 00:19:57.329 }, 00:19:57.329 { 00:19:57.329 "subsystem": "scheduler", 00:19:57.329 "config": [ 00:19:57.329 { 00:19:57.329 "method": "framework_set_scheduler", 00:19:57.329 "params": { 00:19:57.329 "name": "static" 00:19:57.329 } 00:19:57.329 } 00:19:57.329 ] 00:19:57.329 }, 00:19:57.329 { 00:19:57.329 "subsystem": "nvmf", 00:19:57.329 "config": [ 00:19:57.329 { 00:19:57.329 "method": "nvmf_set_config", 00:19:57.329 "params": { 00:19:57.329 "discovery_filter": "match_any", 00:19:57.329 "admin_cmd_passthru": { 00:19:57.329 "identify_ctrlr": false 00:19:57.329 }, 00:19:57.329 "dhchap_digests": [ 00:19:57.329 "sha256", 00:19:57.329 "sha384", 00:19:57.329 "sha512" 00:19:57.329 ], 00:19:57.329 "dhchap_dhgroups": [ 
00:19:57.329 "null", 00:19:57.329 "ffdhe2048", 00:19:57.329 "ffdhe3072", 00:19:57.329 "ffdhe4096", 00:19:57.329 "ffdhe6144", 00:19:57.329 "ffdhe8192" 00:19:57.329 ] 00:19:57.329 } 00:19:57.329 }, 00:19:57.329 { 00:19:57.329 "method": "nvmf_set_max_subsystems", 00:19:57.329 "params": { 00:19:57.329 "max_subsystems": 1024 00:19:57.329 } 00:19:57.329 }, 00:19:57.329 { 00:19:57.329 "method": "nvmf_set_crdt", 00:19:57.329 "params": { 00:19:57.329 "crdt1": 0, 00:19:57.329 "crdt2": 0, 00:19:57.329 "crdt3": 0 00:19:57.329 } 00:19:57.329 }, 00:19:57.329 { 00:19:57.329 "method": "nvmf_create_transport", 00:19:57.329 "params": { 00:19:57.329 "trtype": "TCP", 00:19:57.329 "max_queue_depth": 128, 00:19:57.329 "max_io_qpairs_per_ctrlr": 127, 00:19:57.329 "in_capsule_data_size": 4096, 00:19:57.329 "max_io_size": 131072, 00:19:57.329 "io_unit_size": 131072, 00:19:57.329 "max_aq_depth": 128, 00:19:57.329 "num_shared_buffers": 511, 00:19:57.329 "buf_cache_size": 4294967295, 00:19:57.329 "dif_insert_or_strip": false, 00:19:57.329 "zcopy": false, 00:19:57.329 "c2h_success": false, 00:19:57.329 "sock_priority": 0, 00:19:57.329 "abort_timeout_sec": 1, 00:19:57.329 "ack_timeout": 0, 00:19:57.329 "data_wr_pool_size": 0 00:19:57.329 } 00:19:57.329 }, 00:19:57.329 { 00:19:57.329 "method": "nvmf_create_subsystem", 00:19:57.329 "params": { 00:19:57.329 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:57.329 "allow_any_host": false, 00:19:57.329 "serial_number": "00000000000000000000", 00:19:57.329 "model_number": "SPDK bdev Controller", 00:19:57.329 "max_namespaces": 32, 00:19:57.329 "min_cntlid": 1, 00:19:57.329 "max_cntlid": 65519, 00:19:57.329 "ana_reporting": false 00:19:57.329 } 00:19:57.329 }, 00:19:57.329 { 00:19:57.329 "method": "nvmf_subsystem_add_host", 00:19:57.329 "params": { 00:19:57.329 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:57.329 "host": "nqn.2016-06.io.spdk:host1", 00:19:57.329 "psk": "key0" 00:19:57.329 } 00:19:57.329 }, 00:19:57.329 { 00:19:57.329 "method": "nvmf_subsystem_add_ns", 00:19:57.329 "params": { 00:19:57.329 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:57.329 "namespace": { 00:19:57.329 "nsid": 1, 00:19:57.329 "bdev_name": "malloc0", 00:19:57.329 "nguid": "A06292E7AA0841458D5919528FC21A69", 00:19:57.329 "uuid": "a06292e7-aa08-4145-8d59-19528fc21a69", 00:19:57.329 "no_auto_visible": false 00:19:57.329 } 00:19:57.329 } 00:19:57.329 }, 00:19:57.329 { 00:19:57.329 "method": "nvmf_subsystem_add_listener", 00:19:57.329 "params": { 00:19:57.329 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:57.329 "listen_address": { 00:19:57.329 "trtype": "TCP", 00:19:57.329 "adrfam": "IPv4", 00:19:57.329 "traddr": "10.0.0.3", 00:19:57.329 "trsvcid": "4420" 00:19:57.329 }, 00:19:57.329 "secure_channel": false, 00:19:57.329 "sock_impl": "ssl" 00:19:57.329 } 00:19:57.329 } 00:19:57.329 ] 00:19:57.329 } 00:19:57.329 ] 00:19:57.329 }' 00:19:57.329 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:57.589 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:19:57.589 "subsystems": [ 00:19:57.589 { 00:19:57.589 "subsystem": "keyring", 00:19:57.589 "config": [ 00:19:57.589 { 00:19:57.589 "method": "keyring_file_add_key", 00:19:57.589 "params": { 00:19:57.589 "name": "key0", 00:19:57.589 "path": "/tmp/tmp.jaVnDvSk3q" 00:19:57.589 } 00:19:57.589 } 00:19:57.589 ] 00:19:57.589 }, 00:19:57.589 { 00:19:57.589 "subsystem": "iobuf", 00:19:57.589 "config": [ 00:19:57.589 { 00:19:57.589 "method": 
"iobuf_set_options", 00:19:57.589 "params": { 00:19:57.589 "small_pool_count": 8192, 00:19:57.589 "large_pool_count": 1024, 00:19:57.589 "small_bufsize": 8192, 00:19:57.589 "large_bufsize": 135168, 00:19:57.589 "enable_numa": false 00:19:57.589 } 00:19:57.589 } 00:19:57.589 ] 00:19:57.589 }, 00:19:57.589 { 00:19:57.589 "subsystem": "sock", 00:19:57.589 "config": [ 00:19:57.589 { 00:19:57.589 "method": "sock_set_default_impl", 00:19:57.589 "params": { 00:19:57.589 "impl_name": "uring" 00:19:57.589 } 00:19:57.589 }, 00:19:57.589 { 00:19:57.589 "method": "sock_impl_set_options", 00:19:57.589 "params": { 00:19:57.589 "impl_name": "ssl", 00:19:57.589 "recv_buf_size": 4096, 00:19:57.589 "send_buf_size": 4096, 00:19:57.589 "enable_recv_pipe": true, 00:19:57.589 "enable_quickack": false, 00:19:57.589 "enable_placement_id": 0, 00:19:57.589 "enable_zerocopy_send_server": true, 00:19:57.589 "enable_zerocopy_send_client": false, 00:19:57.589 "zerocopy_threshold": 0, 00:19:57.589 "tls_version": 0, 00:19:57.589 "enable_ktls": false 00:19:57.589 } 00:19:57.589 }, 00:19:57.589 { 00:19:57.589 "method": "sock_impl_set_options", 00:19:57.589 "params": { 00:19:57.589 "impl_name": "posix", 00:19:57.589 "recv_buf_size": 2097152, 00:19:57.589 "send_buf_size": 2097152, 00:19:57.589 "enable_recv_pipe": true, 00:19:57.589 "enable_quickack": false, 00:19:57.589 "enable_placement_id": 0, 00:19:57.589 "enable_zerocopy_send_server": true, 00:19:57.589 "enable_zerocopy_send_client": false, 00:19:57.589 "zerocopy_threshold": 0, 00:19:57.589 "tls_version": 0, 00:19:57.589 "enable_ktls": false 00:19:57.589 } 00:19:57.589 }, 00:19:57.589 { 00:19:57.589 "method": "sock_impl_set_options", 00:19:57.589 "params": { 00:19:57.589 "impl_name": "uring", 00:19:57.589 "recv_buf_size": 2097152, 00:19:57.589 "send_buf_size": 2097152, 00:19:57.589 "enable_recv_pipe": true, 00:19:57.589 "enable_quickack": false, 00:19:57.589 "enable_placement_id": 0, 00:19:57.589 "enable_zerocopy_send_server": false, 00:19:57.589 "enable_zerocopy_send_client": false, 00:19:57.589 "zerocopy_threshold": 0, 00:19:57.589 "tls_version": 0, 00:19:57.589 "enable_ktls": false 00:19:57.589 } 00:19:57.589 } 00:19:57.589 ] 00:19:57.589 }, 00:19:57.589 { 00:19:57.589 "subsystem": "vmd", 00:19:57.589 "config": [] 00:19:57.589 }, 00:19:57.589 { 00:19:57.589 "subsystem": "accel", 00:19:57.589 "config": [ 00:19:57.589 { 00:19:57.589 "method": "accel_set_options", 00:19:57.589 "params": { 00:19:57.589 "small_cache_size": 128, 00:19:57.589 "large_cache_size": 16, 00:19:57.589 "task_count": 2048, 00:19:57.589 "sequence_count": 2048, 00:19:57.589 "buf_count": 2048 00:19:57.589 } 00:19:57.589 } 00:19:57.589 ] 00:19:57.589 }, 00:19:57.589 { 00:19:57.589 "subsystem": "bdev", 00:19:57.589 "config": [ 00:19:57.589 { 00:19:57.589 "method": "bdev_set_options", 00:19:57.589 "params": { 00:19:57.589 "bdev_io_pool_size": 65535, 00:19:57.589 "bdev_io_cache_size": 256, 00:19:57.589 "bdev_auto_examine": true, 00:19:57.589 "iobuf_small_cache_size": 128, 00:19:57.589 "iobuf_large_cache_size": 16 00:19:57.589 } 00:19:57.589 }, 00:19:57.589 { 00:19:57.589 "method": "bdev_raid_set_options", 00:19:57.589 "params": { 00:19:57.589 "process_window_size_kb": 1024, 00:19:57.589 "process_max_bandwidth_mb_sec": 0 00:19:57.589 } 00:19:57.589 }, 00:19:57.589 { 00:19:57.589 "method": "bdev_iscsi_set_options", 00:19:57.589 "params": { 00:19:57.589 "timeout_sec": 30 00:19:57.589 } 00:19:57.589 }, 00:19:57.589 { 00:19:57.589 "method": "bdev_nvme_set_options", 00:19:57.589 "params": { 00:19:57.589 
"action_on_timeout": "none", 00:19:57.589 "timeout_us": 0, 00:19:57.589 "timeout_admin_us": 0, 00:19:57.589 "keep_alive_timeout_ms": 10000, 00:19:57.589 "arbitration_burst": 0, 00:19:57.589 "low_priority_weight": 0, 00:19:57.589 "medium_priority_weight": 0, 00:19:57.589 "high_priority_weight": 0, 00:19:57.589 "nvme_adminq_poll_period_us": 10000, 00:19:57.589 "nvme_ioq_poll_period_us": 0, 00:19:57.589 "io_queue_requests": 512, 00:19:57.589 "delay_cmd_submit": true, 00:19:57.589 "transport_retry_count": 4, 00:19:57.590 "bdev_retry_count": 3, 00:19:57.590 "transport_ack_timeout": 0, 00:19:57.590 "ctrlr_loss_timeout_sec": 0, 00:19:57.590 "reconnect_delay_sec": 0, 00:19:57.590 "fast_io_fail_timeout_sec": 0, 00:19:57.590 "disable_auto_failback": false, 00:19:57.590 "generate_uuids": false, 00:19:57.590 "transport_tos": 0, 00:19:57.590 "nvme_error_stat": false, 00:19:57.590 "rdma_srq_size": 0, 00:19:57.590 "io_path_stat": false, 00:19:57.590 "allow_accel_sequence": false, 00:19:57.590 "rdma_max_cq_size": 0, 00:19:57.590 "rdma_cm_event_timeout_ms": 0, 00:19:57.590 "dhchap_digests": [ 00:19:57.590 "sha256", 00:19:57.590 "sha384", 00:19:57.590 "sha512" 00:19:57.590 ], 00:19:57.590 "dhchap_dhgroups": [ 00:19:57.590 "null", 00:19:57.590 "ffdhe2048", 00:19:57.590 "ffdhe3072", 00:19:57.590 "ffdhe4096", 00:19:57.590 "ffdhe6144", 00:19:57.590 "ffdhe8192" 00:19:57.590 ] 00:19:57.590 } 00:19:57.590 }, 00:19:57.590 { 00:19:57.590 "method": "bdev_nvme_attach_controller", 00:19:57.590 "params": { 00:19:57.590 "name": "nvme0", 00:19:57.590 "trtype": "TCP", 00:19:57.590 "adrfam": "IPv4", 00:19:57.590 "traddr": "10.0.0.3", 00:19:57.590 "trsvcid": "4420", 00:19:57.590 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:57.590 "prchk_reftag": false, 00:19:57.590 "prchk_guard": false, 00:19:57.590 "ctrlr_loss_timeout_sec": 0, 00:19:57.590 "reconnect_delay_sec": 0, 00:19:57.590 "fast_io_fail_timeout_sec": 0, 00:19:57.590 "psk": "key0", 00:19:57.590 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:57.590 "hdgst": false, 00:19:57.590 "ddgst": false, 00:19:57.590 "multipath": "multipath" 00:19:57.590 } 00:19:57.590 }, 00:19:57.590 { 00:19:57.590 "method": "bdev_nvme_set_hotplug", 00:19:57.590 "params": { 00:19:57.590 "period_us": 100000, 00:19:57.590 "enable": false 00:19:57.590 } 00:19:57.590 }, 00:19:57.590 { 00:19:57.590 "method": "bdev_enable_histogram", 00:19:57.590 "params": { 00:19:57.590 "name": "nvme0n1", 00:19:57.590 "enable": true 00:19:57.590 } 00:19:57.590 }, 00:19:57.590 { 00:19:57.590 "method": "bdev_wait_for_examine" 00:19:57.590 } 00:19:57.590 ] 00:19:57.590 }, 00:19:57.590 { 00:19:57.590 "subsystem": "nbd", 00:19:57.590 "config": [] 00:19:57.590 } 00:19:57.590 ] 00:19:57.590 }' 00:19:57.590 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 75917 00:19:57.590 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 75917 ']' 00:19:57.590 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 75917 00:19:57.590 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:57.590 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:57.590 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75917 00:19:57.590 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:57.590 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:57.590 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75917' 00:19:57.590 killing process with pid 75917 00:19:57.590 Received shutdown signal, test time was about 1.000000 seconds 00:19:57.590 00:19:57.590 Latency(us) 00:19:57.590 [2024-11-06T14:24:25.225Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:57.590 [2024-11-06T14:24:25.225Z] =================================================================================================================== 00:19:57.590 [2024-11-06T14:24:25.225Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:57.590 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 75917 00:19:57.590 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 75917 00:19:58.997 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 75885 00:19:58.997 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 75885 ']' 00:19:58.998 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 75885 00:19:58.998 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:58.998 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:58.998 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75885 00:19:58.998 killing process with pid 75885 00:19:58.998 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:58.998 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:58.998 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75885' 00:19:58.998 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 75885 00:19:58.998 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 75885 00:20:00.377 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:20:00.377 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:00.377 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:00.377 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:20:00.377 "subsystems": [ 00:20:00.377 { 00:20:00.377 "subsystem": "keyring", 00:20:00.377 "config": [ 00:20:00.377 { 00:20:00.377 "method": "keyring_file_add_key", 00:20:00.377 "params": { 00:20:00.377 "name": "key0", 00:20:00.377 "path": "/tmp/tmp.jaVnDvSk3q" 00:20:00.377 } 00:20:00.377 } 00:20:00.377 ] 00:20:00.377 }, 00:20:00.377 { 00:20:00.377 "subsystem": "iobuf", 00:20:00.377 "config": [ 00:20:00.377 { 00:20:00.377 "method": "iobuf_set_options", 00:20:00.377 "params": { 00:20:00.377 "small_pool_count": 8192, 00:20:00.377 "large_pool_count": 1024, 00:20:00.377 "small_bufsize": 8192, 00:20:00.377 "large_bufsize": 135168, 00:20:00.377 "enable_numa": false 00:20:00.377 } 00:20:00.377 } 00:20:00.377 ] 00:20:00.377 }, 00:20:00.377 { 00:20:00.377 "subsystem": "sock", 00:20:00.377 "config": [ 00:20:00.377 { 00:20:00.377 "method": "sock_set_default_impl", 00:20:00.377 "params": { 
00:20:00.377 "impl_name": "uring" 00:20:00.377 } 00:20:00.377 }, 00:20:00.377 { 00:20:00.377 "method": "sock_impl_set_options", 00:20:00.377 "params": { 00:20:00.377 "impl_name": "ssl", 00:20:00.377 "recv_buf_size": 4096, 00:20:00.377 "send_buf_size": 4096, 00:20:00.377 "enable_recv_pipe": true, 00:20:00.377 "enable_quickack": false, 00:20:00.377 "enable_placement_id": 0, 00:20:00.377 "enable_zerocopy_send_server": true, 00:20:00.377 "enable_zerocopy_send_client": false, 00:20:00.377 "zerocopy_threshold": 0, 00:20:00.377 "tls_version": 0, 00:20:00.377 "enable_ktls": false 00:20:00.378 } 00:20:00.378 }, 00:20:00.378 { 00:20:00.378 "method": "sock_impl_set_options", 00:20:00.378 "params": { 00:20:00.378 "impl_name": "posix", 00:20:00.378 "recv_buf_size": 2097152, 00:20:00.378 "send_buf_size": 2097152, 00:20:00.378 "enable_recv_pipe": true, 00:20:00.378 "enable_quickack": false, 00:20:00.378 "enable_placement_id": 0, 00:20:00.378 "enable_zerocopy_send_server": true, 00:20:00.378 "enable_zerocopy_send_client": false, 00:20:00.378 "zerocopy_threshold": 0, 00:20:00.378 "tls_version": 0, 00:20:00.378 "enable_ktls": false 00:20:00.378 } 00:20:00.378 }, 00:20:00.378 { 00:20:00.378 "method": "sock_impl_set_options", 00:20:00.378 "params": { 00:20:00.378 "impl_name": "uring", 00:20:00.378 "recv_buf_size": 2097152, 00:20:00.378 "send_buf_size": 2097152, 00:20:00.378 "enable_recv_pipe": true, 00:20:00.378 "enable_quickack": false, 00:20:00.378 "enable_placement_id": 0, 00:20:00.378 "enable_zerocopy_send_server": false, 00:20:00.378 "enable_zerocopy_send_client": false, 00:20:00.378 "zerocopy_threshold": 0, 00:20:00.378 "tls_version": 0, 00:20:00.378 "enable_ktls": false 00:20:00.378 } 00:20:00.378 } 00:20:00.378 ] 00:20:00.378 }, 00:20:00.378 { 00:20:00.378 "subsystem": "vmd", 00:20:00.378 "config": [] 00:20:00.378 }, 00:20:00.378 { 00:20:00.378 "subsystem": "accel", 00:20:00.378 "config": [ 00:20:00.378 { 00:20:00.378 "method": "accel_set_options", 00:20:00.378 "params": { 00:20:00.378 "small_cache_size": 128, 00:20:00.378 "large_cache_size": 16, 00:20:00.378 "task_count": 2048, 00:20:00.378 "sequence_count": 2048, 00:20:00.378 "buf_count": 2048 00:20:00.378 } 00:20:00.378 } 00:20:00.378 ] 00:20:00.378 }, 00:20:00.378 { 00:20:00.378 "subsystem": "bdev", 00:20:00.378 "config": [ 00:20:00.378 { 00:20:00.378 "method": "bdev_set_options", 00:20:00.378 "params": { 00:20:00.378 "bdev_io_pool_size": 65535, 00:20:00.378 "bdev_io_cache_size": 256, 00:20:00.378 "bdev_auto_examine": true, 00:20:00.378 "iobuf_small_cache_size": 128, 00:20:00.378 "iobuf_large_cache_size": 16 00:20:00.378 } 00:20:00.378 }, 00:20:00.378 { 00:20:00.378 "method": "bdev_raid_set_options", 00:20:00.378 "params": { 00:20:00.378 "process_window_size_kb": 1024, 00:20:00.378 "process_max_bandwidth_mb_sec": 0 00:20:00.378 } 00:20:00.378 }, 00:20:00.378 { 00:20:00.378 "method": "bdev_iscsi_set_options", 00:20:00.378 "params": { 00:20:00.378 "timeout_sec": 30 00:20:00.378 } 00:20:00.378 }, 00:20:00.378 { 00:20:00.378 "method": "bdev_nvme_set_options", 00:20:00.378 "params": { 00:20:00.378 "action_on_timeout": "none", 00:20:00.378 "timeout_us": 0, 00:20:00.378 "timeout_admin_us": 0, 00:20:00.378 "keep_alive_timeout_ms": 10000, 00:20:00.378 "arbitration_burst": 0, 00:20:00.378 "low_priority_weight": 0, 00:20:00.378 "medium_priority_weight": 0, 00:20:00.378 "high_priority_weight": 0, 00:20:00.378 "nvme_adminq_poll_period_us": 10000, 00:20:00.378 "nvme_ioq_poll_period_us": 0, 00:20:00.378 "io_queue_requests": 0, 00:20:00.378 "delay_cmd_submit": 
true, 00:20:00.378 "transport_retry_count": 4, 00:20:00.378 "bdev_retry_count": 3, 00:20:00.378 "transport_ack_timeout": 0, 00:20:00.378 "ctrlr_loss_timeout_sec": 0, 00:20:00.378 "reconnect_delay_sec": 0, 00:20:00.378 "fast_io_fail_timeout_sec": 0, 00:20:00.378 "disable_auto_failback": false, 00:20:00.378 "generate_uuids": false, 00:20:00.378 "transport_tos": 0, 00:20:00.378 "nvme_error_stat": false, 00:20:00.378 "rdma_srq_size": 0, 00:20:00.378 "io_path_stat": false, 00:20:00.378 "allow_accel_sequence": false, 00:20:00.378 "rdma_max_cq_size": 0, 00:20:00.378 "rdma_cm_event_timeout_ms": 0, 00:20:00.378 "dhchap_digests": [ 00:20:00.378 "sha256", 00:20:00.378 "sha384", 00:20:00.378 "sha512" 00:20:00.378 ], 00:20:00.378 "dhchap_dhgroups": [ 00:20:00.378 "null", 00:20:00.378 "ffdhe2048", 00:20:00.378 "ffdhe3072", 00:20:00.378 "ffdhe4096", 00:20:00.378 "ffdhe6144", 00:20:00.378 "ffdhe8192" 00:20:00.378 ] 00:20:00.378 } 00:20:00.378 }, 00:20:00.378 { 00:20:00.378 "method": "bdev_nvme_set_hotplug", 00:20:00.378 "params": { 00:20:00.378 "period_us": 100000, 00:20:00.378 "enable": false 00:20:00.378 } 00:20:00.378 }, 00:20:00.378 { 00:20:00.378 "method": "bdev_malloc_create", 00:20:00.378 "params": { 00:20:00.378 "name": "malloc0", 00:20:00.378 "num_blocks": 8192, 00:20:00.378 "block_size": 4096, 00:20:00.378 "physical_block_size": 4096, 00:20:00.378 "uuid": "a06292e7-aa08-4145-8d59-19528fc21a69", 00:20:00.378 "optimal_io_boundary": 0, 00:20:00.378 "md_size": 0, 00:20:00.378 "dif_type": 0, 00:20:00.378 "dif_is_head_of_md": false, 00:20:00.378 "dif_pi_format": 0 00:20:00.378 } 00:20:00.378 }, 00:20:00.378 { 00:20:00.378 "method": "bdev_wait_for_examine" 00:20:00.378 } 00:20:00.378 ] 00:20:00.378 }, 00:20:00.378 { 00:20:00.378 "subsystem": "nbd", 00:20:00.378 "config": [] 00:20:00.378 }, 00:20:00.378 { 00:20:00.378 "subsystem": "scheduler", 00:20:00.378 "config": [ 00:20:00.378 { 00:20:00.378 "method": "framework_set_scheduler", 00:20:00.378 "params": { 00:20:00.378 "name": "static" 00:20:00.378 } 00:20:00.378 } 00:20:00.378 ] 00:20:00.378 }, 00:20:00.378 { 00:20:00.378 "subsystem": "nvmf", 00:20:00.378 "config": [ 00:20:00.378 { 00:20:00.378 "method": "nvmf_set_config", 00:20:00.378 "params": { 00:20:00.378 "discovery_filter": "match_any", 00:20:00.378 "admin_cmd_passthru": { 00:20:00.378 "identify_ctrlr": false 00:20:00.378 }, 00:20:00.378 "dhchap_digests": [ 00:20:00.378 "sha256", 00:20:00.378 "sha384", 00:20:00.378 "sha512" 00:20:00.378 ], 00:20:00.378 "dhchap_dhgroups": [ 00:20:00.378 "null", 00:20:00.378 "ffdhe2048", 00:20:00.378 "ffdhe3072", 00:20:00.378 "ffdhe4096", 00:20:00.378 "ffdhe6144", 00:20:00.378 "ffdhe8192" 00:20:00.378 ] 00:20:00.378 } 00:20:00.378 }, 00:20:00.378 { 00:20:00.378 "method": "nvmf_set_max_subsystems", 00:20:00.378 "params": { 00:20:00.378 "max_subsystems": 1024 00:20:00.378 } 00:20:00.378 }, 00:20:00.378 { 00:20:00.378 "method": "nvmf_set_crdt", 00:20:00.378 "params": { 00:20:00.378 "crdt1": 0, 00:20:00.378 "crdt2": 0, 00:20:00.378 "crdt3": 0 00:20:00.378 } 00:20:00.378 }, 00:20:00.378 { 00:20:00.378 "method": "nvmf_create_transport", 00:20:00.378 "params": { 00:20:00.378 "trtype": "TCP", 00:20:00.378 "max_queue_depth": 128, 00:20:00.378 "max_io_qpairs_per_ctrlr": 127, 00:20:00.378 "in_capsule_data_size": 4096, 00:20:00.378 "max_io_size": 131072, 00:20:00.378 "io_unit_size": 131072, 00:20:00.378 "max_aq_depth": 128, 00:20:00.378 "num_shared_buffers": 511, 00:20:00.378 "buf_cache_size": 4294967295, 00:20:00.378 "dif_insert_or_strip": false, 00:20:00.378 "zcopy": false, 
00:20:00.378 "c2h_success": false, 00:20:00.378 "sock_priority": 0, 00:20:00.378 "abort_timeout_sec": 1, 00:20:00.378 "ack_timeout": 0, 00:20:00.378 "data_wr_pool_size": 0 00:20:00.378 } 00:20:00.378 }, 00:20:00.378 { 00:20:00.378 "method": "nvmf_create_subsystem", 00:20:00.378 "params": { 00:20:00.378 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.378 "allow_any_host": false, 00:20:00.378 "serial_number": "00000000000000000000", 00:20:00.378 "model_number": "SPDK bdev Controller", 00:20:00.378 "max_namespaces": 32, 00:20:00.378 "min_cntlid": 1, 00:20:00.378 "max_cntlid": 65519, 00:20:00.378 "ana_reporting": false 00:20:00.378 } 00:20:00.378 }, 00:20:00.378 { 00:20:00.378 "method": "nvmf_subsystem_add_host", 00:20:00.378 "params": { 00:20:00.378 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.378 "host": "nqn.2016-06.io.spdk:host1", 00:20:00.378 "psk": "key0" 00:20:00.378 } 00:20:00.378 }, 00:20:00.378 { 00:20:00.378 "method": "nvmf_subsystem_add_ns", 00:20:00.378 "params": { 00:20:00.378 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.378 "namespace": { 00:20:00.378 "nsid": 1, 00:20:00.378 "bdev_name": "malloc0", 00:20:00.378 "nguid": "A06292E7AA0841458D5919528FC21A69", 00:20:00.378 "uuid": "a06292e7-aa08-4145-8d59-19528fc21a69", 00:20:00.378 "no_auto_visible": false 00:20:00.378 } 00:20:00.378 } 00:20:00.378 }, 00:20:00.378 { 00:20:00.378 "method": "nvmf_subsystem_add_listener", 00:20:00.378 "params": { 00:20:00.378 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.378 "listen_address": { 00:20:00.378 "trtype": "TCP", 00:20:00.378 "adrfam": "IPv4", 00:20:00.378 "traddr": "10.0.0.3", 00:20:00.378 "trsvcid": "4420" 00:20:00.378 }, 00:20:00.378 "secure_channel": false, 00:20:00.378 "sock_impl": "ssl" 00:20:00.378 } 00:20:00.378 } 00:20:00.378 ] 00:20:00.378 } 00:20:00.378 ] 00:20:00.378 }' 00:20:00.378 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.378 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:00.378 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=76002 00:20:00.378 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 76002 00:20:00.378 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 76002 ']' 00:20:00.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:00.379 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.379 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:00.379 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.379 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:00.379 14:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.379 [2024-11-06 14:24:27.743499] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:20:00.379 [2024-11-06 14:24:27.743619] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:00.379 [2024-11-06 14:24:27.930101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.638 [2024-11-06 14:24:28.074356] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:00.638 [2024-11-06 14:24:28.074441] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:00.638 [2024-11-06 14:24:28.074461] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:00.638 [2024-11-06 14:24:28.074485] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:00.638 [2024-11-06 14:24:28.074501] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:00.638 [2024-11-06 14:24:28.075992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.896 [2024-11-06 14:24:28.442457] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:01.155 [2024-11-06 14:24:28.665614] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:01.155 [2024-11-06 14:24:28.697498] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:01.155 [2024-11-06 14:24:28.697811] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:01.155 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:01.155 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:01.155 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:01.155 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:01.155 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.415 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:01.415 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=76034 00:20:01.415 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 76034 /var/tmp/bdevperf.sock 00:20:01.415 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 76034 ']' 00:20:01.415 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:01.415 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:01.415 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:01.415 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:20:01.415 "subsystems": [ 00:20:01.415 { 00:20:01.415 "subsystem": "keyring", 00:20:01.415 "config": [ 00:20:01.415 { 00:20:01.415 "method": "keyring_file_add_key", 00:20:01.415 "params": { 00:20:01.415 "name": "key0", 00:20:01.415 "path": "/tmp/tmp.jaVnDvSk3q" 00:20:01.415 } 
00:20:01.415 } 00:20:01.415 ] 00:20:01.415 }, 00:20:01.415 { 00:20:01.415 "subsystem": "iobuf", 00:20:01.415 "config": [ 00:20:01.415 { 00:20:01.415 "method": "iobuf_set_options", 00:20:01.415 "params": { 00:20:01.415 "small_pool_count": 8192, 00:20:01.415 "large_pool_count": 1024, 00:20:01.415 "small_bufsize": 8192, 00:20:01.415 "large_bufsize": 135168, 00:20:01.415 "enable_numa": false 00:20:01.415 } 00:20:01.415 } 00:20:01.415 ] 00:20:01.415 }, 00:20:01.415 { 00:20:01.415 "subsystem": "sock", 00:20:01.415 "config": [ 00:20:01.415 { 00:20:01.415 "method": "sock_set_default_impl", 00:20:01.415 "params": { 00:20:01.415 "impl_name": "uring" 00:20:01.415 } 00:20:01.415 }, 00:20:01.415 { 00:20:01.415 "method": "sock_impl_set_options", 00:20:01.415 "params": { 00:20:01.415 "impl_name": "ssl", 00:20:01.415 "recv_buf_size": 4096, 00:20:01.415 "send_buf_size": 4096, 00:20:01.415 "enable_recv_pipe": true, 00:20:01.415 "enable_quickack": false, 00:20:01.415 "enable_placement_id": 0, 00:20:01.415 "enable_zerocopy_send_server": true, 00:20:01.415 "enable_zerocopy_send_client": false, 00:20:01.415 "zerocopy_threshold": 0, 00:20:01.415 "tls_version": 0, 00:20:01.415 "enable_ktls": false 00:20:01.415 } 00:20:01.415 }, 00:20:01.415 { 00:20:01.415 "method": "sock_impl_set_options", 00:20:01.415 "params": { 00:20:01.415 "impl_name": "posix", 00:20:01.415 "recv_buf_size": 2097152, 00:20:01.415 "send_buf_size": 2097152, 00:20:01.415 "enable_recv_pipe": true, 00:20:01.415 "enable_quickack": false, 00:20:01.415 "enable_placement_id": 0, 00:20:01.415 "enable_zerocopy_send_server": true, 00:20:01.415 "enable_zerocopy_send_client": false, 00:20:01.415 "zerocopy_threshold": 0, 00:20:01.415 "tls_version": 0, 00:20:01.415 "enable_ktls": false 00:20:01.415 } 00:20:01.415 }, 00:20:01.415 { 00:20:01.415 "method": "sock_impl_set_options", 00:20:01.415 "params": { 00:20:01.415 "impl_name": "uring", 00:20:01.415 "recv_buf_size": 2097152, 00:20:01.415 "send_buf_size": 2097152, 00:20:01.415 "enable_recv_pipe": true, 00:20:01.415 "enable_quickack": false, 00:20:01.415 "enable_placement_id": 0, 00:20:01.416 "enable_zerocopy_send_server": false, 00:20:01.416 "enable_zerocopy_send_client": false, 00:20:01.416 "zerocopy_threshold": 0, 00:20:01.416 "tls_version": 0, 00:20:01.416 "enable_ktls": false 00:20:01.416 } 00:20:01.416 } 00:20:01.416 ] 00:20:01.416 }, 00:20:01.416 { 00:20:01.416 "subsystem": "vmd", 00:20:01.416 "config": [] 00:20:01.416 }, 00:20:01.416 { 00:20:01.416 "subsystem": "accel", 00:20:01.416 "config": [ 00:20:01.416 { 00:20:01.416 "method": "accel_set_options", 00:20:01.416 "params": { 00:20:01.416 "small_cache_size": 128, 00:20:01.416 "large_cache_size": 16, 00:20:01.416 "task_count": 2048, 00:20:01.416 "sequence_count": 2048, 00:20:01.416 "buf_count": 2048 00:20:01.416 } 00:20:01.416 } 00:20:01.416 ] 00:20:01.416 }, 00:20:01.416 { 00:20:01.416 "subsystem": "bdev", 00:20:01.416 "config": [ 00:20:01.416 { 00:20:01.416 "method": "bdev_set_options", 00:20:01.416 "params": { 00:20:01.416 "bdev_io_pool_size": 65535, 00:20:01.416 "bdev_io_cache_size": 256, 00:20:01.416 "bdev_auto_examine": true, 00:20:01.416 "iobuf_small_cache_size": 128, 00:20:01.416 "iobuf_large_cache_size": 16 00:20:01.416 } 00:20:01.416 }, 00:20:01.416 { 00:20:01.416 "method": "bdev_raid_set_options", 00:20:01.416 "params": { 00:20:01.416 "process_window_size_kb": 1024, 00:20:01.416 "process_max_bandwidth_mb_sec": 0 00:20:01.416 } 00:20:01.416 }, 00:20:01.416 { 00:20:01.416 "method": "bdev_iscsi_set_options", 00:20:01.416 "params": { 00:20:01.416 
"timeout_sec": 30 00:20:01.416 } 00:20:01.416 }, 00:20:01.416 { 00:20:01.416 "method": "bdev_nvme_set_options", 00:20:01.416 "params": { 00:20:01.416 "action_on_timeout": "none", 00:20:01.416 "timeout_us": 0, 00:20:01.416 "timeout_admin_us": 0, 00:20:01.416 "keep_alive_timeout_ms": 10000, 00:20:01.416 "arbitration_burst": 0, 00:20:01.416 "low_priority_weight": 0, 00:20:01.416 "medium_priority_weight": 0, 00:20:01.416 "high_priority_weight": 0, 00:20:01.416 "nvme_adminq_poll_period_us": 10000, 00:20:01.416 "nvme_ioq_poll_period_us": 0, 00:20:01.416 "io_queue_requests": 512, 00:20:01.416 "delay_cmd_submit": true, 00:20:01.416 "transport_retry_count": 4, 00:20:01.416 "bdev_retry_count": 3, 00:20:01.416 "transport_ack_timeout": 0, 00:20:01.416 "ctrlr_loss_timeout_sec": 0, 00:20:01.416 "reconnect_delay_sec": 0, 00:20:01.416 "fast_io_fail_timeout_sec": 0, 00:20:01.416 "disable_auto_failback": false, 00:20:01.416 "generate_uuids": false, 00:20:01.416 "transport_tos": 0, 00:20:01.416 "nvme_error_stat": false, 00:20:01.416 "rdma_srq_size": 0, 00:20:01.416 "io_path_stat": false, 00:20:01.416 "allow_accel_sequence": false, 00:20:01.416 "rdma_max_cq_size": 0, 00:20:01.416 "rdma_cm_event_timeout_ms": 0, 00:20:01.416 "dhchap_digests": [ 00:20:01.416 "sha256", 00:20:01.416 "sha384", 00:20:01.416 "sha512" 00:20:01.416 ], 00:20:01.416 "dhchap_dhgroups": [ 00:20:01.416 "null", 00:20:01.416 "ffdhe2048", 00:20:01.416 "ffdhe3072", 00:20:01.416 "ffdhe4096", 00:20:01.416 "ffdhe6144", 00:20:01.416 "ffdhe8192" 00:20:01.416 ] 00:20:01.416 } 00:20:01.416 }, 00:20:01.416 { 00:20:01.416 "method": "bdev_nvme_attach_controller", 00:20:01.416 "params": { 00:20:01.416 "name": "nvme0", 00:20:01.416 "trtype": "TCP", 00:20:01.416 "adrfam": "IPv4", 00:20:01.416 "traddr": "10.0.0.3", 00:20:01.416 "trsvcid": "4420", 00:20:01.416 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:01.416 "prchk_reftag": false, 00:20:01.416 "prchk_guard": false, 00:20:01.416 "ctrlr_loss_timeout_sec": 0, 00:20:01.416 "reconnect_delay_sec": 0, 00:20:01.416 "fast_io_fail_timeout_sec": 0, 00:20:01.416 "psk": "key0", 00:20:01.416 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:01.416 "hdgst": false, 00:20:01.416 "ddgst": false, 00:20:01.416 "multipath": "multipath" 00:20:01.416 } 00:20:01.416 }, 00:20:01.416 { 00:20:01.416 "method": "bdev_nvme_set_hotplug", 00:20:01.416 "params": { 00:20:01.416 "period_us": 100000, 00:20:01.416 "enable": false 00:20:01.416 } 00:20:01.416 }, 00:20:01.416 { 00:20:01.416 "method": "bdev_enable_histogram", 00:20:01.416 "params": { 00:20:01.416 "name": "nvme0n1", 00:20:01.416 "enable": true 00:20:01.416 } 00:20:01.416 }, 00:20:01.416 { 00:20:01.416 "method": "bdev_wait_for_examine" 00:20:01.416 } 00:20:01.416 ] 00:20:01.416 }, 00:20:01.416 { 00:20:01.416 "subsystem": "nbd", 00:20:01.416 "config": [] 00:20:01.416 } 00:20:01.416 ] 00:20:01.416 }' 00:20:01.416 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:01.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:01.416 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:01.416 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.416 [2024-11-06 14:24:28.896775] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:20:01.416 [2024-11-06 14:24:28.897236] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76034 ] 00:20:01.675 [2024-11-06 14:24:29.080798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.675 [2024-11-06 14:24:29.202543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:01.935 [2024-11-06 14:24:29.493573] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:02.193 [2024-11-06 14:24:29.627691] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:02.193 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:02.193 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:02.193 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:02.193 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:20:02.451 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.451 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:02.709 Running I/O for 1 seconds... 00:20:03.645 3328.00 IOPS, 13.00 MiB/s 00:20:03.645 Latency(us) 00:20:03.645 [2024-11-06T14:24:31.280Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.645 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:03.645 Verification LBA range: start 0x0 length 0x2000 00:20:03.645 nvme0n1 : 1.04 3333.60 13.02 0.00 0.00 37925.01 10896.35 25266.89 00:20:03.645 [2024-11-06T14:24:31.280Z] =================================================================================================================== 00:20:03.645 [2024-11-06T14:24:31.280Z] Total : 3333.60 13.02 0.00 0.00 37925.01 10896.35 25266.89 00:20:03.645 { 00:20:03.645 "results": [ 00:20:03.645 { 00:20:03.645 "job": "nvme0n1", 00:20:03.645 "core_mask": "0x2", 00:20:03.645 "workload": "verify", 00:20:03.645 "status": "finished", 00:20:03.645 "verify_range": { 00:20:03.645 "start": 0, 00:20:03.645 "length": 8192 00:20:03.645 }, 00:20:03.645 "queue_depth": 128, 00:20:03.645 "io_size": 4096, 00:20:03.645 "runtime": 1.036717, 00:20:03.645 "iops": 3333.6002014050123, 00:20:03.645 "mibps": 13.02187578673833, 00:20:03.645 "io_failed": 0, 00:20:03.645 "io_timeout": 0, 00:20:03.645 "avg_latency_us": 37925.01204819277, 00:20:03.645 "min_latency_us": 10896.346987951807, 00:20:03.645 "max_latency_us": 25266.89156626506 00:20:03.645 } 00:20:03.645 ], 00:20:03.645 "core_count": 1 00:20:03.645 } 00:20:03.645 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:20:03.645 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:20:03.645 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:03.645 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # type=--id 00:20:03.645 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@811 -- # id=0 
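Note: the verify run above is the actual TLS data-path check. bdevperf was launched with -q 128 -o 4k -w verify -t 1, so the summary line is just IOPS times the 4 KiB I/O size over the ~1 s runtime (1.036717 s reported in the results JSON). A quick sanity check of the reported numbers:

  awk 'BEGIN { printf "%.2f MiB/s\n", 3333.60 * 4096 / (1024 * 1024) }'    # prints 13.02, matching the table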
00:20:03.645 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:20:03.645 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:03.645 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:20:03.646 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:20:03.646 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@822 -- # for n in $shm_files 00:20:03.646 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:03.646 nvmf_trace.0 00:20:03.905 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # return 0 00:20:03.905 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 76034 00:20:03.905 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 76034 ']' 00:20:03.905 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 76034 00:20:03.905 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:03.905 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:03.905 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76034 00:20:03.905 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:03.905 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:03.905 killing process with pid 76034 00:20:03.905 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76034' 00:20:03.905 Received shutdown signal, test time was about 1.000000 seconds 00:20:03.905 00:20:03.905 Latency(us) 00:20:03.905 [2024-11-06T14:24:31.540Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.905 [2024-11-06T14:24:31.540Z] =================================================================================================================== 00:20:03.905 [2024-11-06T14:24:31.540Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:03.905 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 76034 00:20:03.905 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 76034 00:20:04.841 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:04.841 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:04.841 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:20:04.841 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:04.841 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:20:04.841 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:04.841 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:04.841 rmmod nvme_tcp 00:20:05.100 rmmod nvme_fabrics 00:20:05.100 rmmod nvme_keyring 00:20:05.100 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:05.100 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:20:05.100 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:20:05.100 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 76002 ']' 00:20:05.100 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 76002 00:20:05.100 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 76002 ']' 00:20:05.100 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 76002 00:20:05.100 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:05.100 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:05.100 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76002 00:20:05.100 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:05.100 killing process with pid 76002 00:20:05.100 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:05.100 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76002' 00:20:05.100 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 76002 00:20:05.100 14:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 76002 00:20:06.478 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:06.478 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:06.478 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:06.478 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:20:06.478 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:20:06.478 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:06.478 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:20:06.478 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:06.478 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:06.478 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:06.478 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:06.478 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:06.478 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:06.478 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:06.478 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:06.478 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:06.478 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:06.478 14:24:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:06.478 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:06.737 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:06.737 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:06.737 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:06.737 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:06.737 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:06.737 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:06.737 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:06.737 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:20:06.737 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.Hc9ZNPRISj /tmp/tmp.bNkjGCrbb9 /tmp/tmp.jaVnDvSk3q 00:20:06.737 00:20:06.737 real 1m48.866s 00:20:06.737 user 2m45.553s 00:20:06.737 sys 0m34.209s 00:20:06.737 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:06.737 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.737 ************************************ 00:20:06.737 END TEST nvmf_tls 00:20:06.737 ************************************ 00:20:06.737 14:24:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:06.737 14:24:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:06.737 14:24:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:06.737 14:24:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:06.737 ************************************ 00:20:06.737 START TEST nvmf_fips 00:20:06.737 ************************************ 00:20:06.737 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:06.997 * Looking for test storage... 
00:20:06.997 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:20:06.997 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:06.997 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:20:06.997 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:06.997 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:06.997 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:06.997 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:06.997 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:06.997 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:06.997 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:06.997 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:06.997 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:06.997 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:20:06.997 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:20:06.997 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:20:06.997 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:06.997 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:06.997 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:06.997 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:06.997 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:06.997 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:06.997 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:06.997 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:06.997 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:06.997 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:06.997 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:20:06.997 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:20:06.997 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:06.997 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:20:06.997 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:20:06.997 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:06.997 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:06.997 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:20:06.997 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:06.997 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:06.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:06.997 --rc genhtml_branch_coverage=1 00:20:06.997 --rc genhtml_function_coverage=1 00:20:06.997 --rc genhtml_legend=1 00:20:06.997 --rc geninfo_all_blocks=1 00:20:06.997 --rc geninfo_unexecuted_blocks=1 00:20:06.997 00:20:06.997 ' 00:20:06.997 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:06.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:06.997 --rc genhtml_branch_coverage=1 00:20:06.997 --rc genhtml_function_coverage=1 00:20:06.997 --rc genhtml_legend=1 00:20:06.997 --rc geninfo_all_blocks=1 00:20:06.997 --rc geninfo_unexecuted_blocks=1 00:20:06.997 00:20:06.997 ' 00:20:06.997 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:06.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:06.997 --rc genhtml_branch_coverage=1 00:20:06.997 --rc genhtml_function_coverage=1 00:20:06.997 --rc genhtml_legend=1 00:20:06.997 --rc geninfo_all_blocks=1 00:20:06.997 --rc geninfo_unexecuted_blocks=1 00:20:06.997 00:20:06.997 ' 00:20:06.997 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:06.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:06.997 --rc genhtml_branch_coverage=1 00:20:06.998 --rc genhtml_function_coverage=1 00:20:06.998 --rc genhtml_legend=1 00:20:06.998 --rc geninfo_all_blocks=1 00:20:06.998 --rc geninfo_unexecuted_blocks=1 00:20:06.998 00:20:06.998 ' 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
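Note: the xtrace above is scripts/common.sh comparing dotted version strings field by field (here for lcov; the same helper gates the FIPS test on OpenSSL >= 3.0.0 a little further down). The idea, reduced to a standalone bash sketch rather than the exact cmp_versions implementation:

  # field-by-field dotted-version compare, same idea as cmp_versions in scripts/common.sh
  version_ge() {
      local IFS=. i
      local -a a b
      read -ra a <<< "$1"; read -ra b <<< "$2"
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 1
      done
      return 0   # all fields equal
  }
  version_ge 3.1.1 3.0.0 && echo "OpenSSL new enough for the FIPS test"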
00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:06.998 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:06.998 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:07.258 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:20:07.258 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:07.258 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:07.258 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:20:07.258 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:07.258 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:07.258 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:07.258 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:07.258 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:07.258 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:07.258 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:20:07.258 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:20:07.258 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:07.258 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:20:07.258 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:20:07.258 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:07.258 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:20:07.258 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:20:07.258 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:20:07.258 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:20:07.258 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:07.258 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:07.258 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:20:07.258 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:20:07.258 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:20:07.258 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:20:07.258 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:20:07.258 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:20:07.258 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:07.258 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:20:07.258 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:20:07.258 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:20:07.258 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:20:07.258 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:20:07.258 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:20:07.258 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:07.258 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:20:07.258 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:20:07.258 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:07.258 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:20:07.258 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:07.258 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:20:07.258 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:20:07.258 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:07.258 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:20:07.258 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:07.258 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:20:07.258 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:20:07.258 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:20:07.258 Error setting digest 00:20:07.258 4082A1607F7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:20:07.258 4082A1607F7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:20:07.258 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:20:07.258 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:07.258 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:07.258 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:07.258 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:20:07.258 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:07.258 
14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:07.258 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:07.258 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:07.258 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:07.259 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:07.259 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:07.259 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:07.259 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:07.259 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:07.259 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:07.259 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:07.259 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:07.259 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:07.259 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:07.259 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:07.259 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:07.259 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:07.259 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:07.259 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:07.259 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:07.259 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:07.259 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:07.259 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:07.259 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:07.259 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:07.259 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:07.259 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:07.259 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:07.259 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:07.259 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:07.259 Cannot find device "nvmf_init_br" 00:20:07.259 14:24:34 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:20:07.259 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:07.259 Cannot find device "nvmf_init_br2" 00:20:07.259 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:20:07.259 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:07.259 Cannot find device "nvmf_tgt_br" 00:20:07.259 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:20:07.259 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:07.259 Cannot find device "nvmf_tgt_br2" 00:20:07.259 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:20:07.259 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:07.259 Cannot find device "nvmf_init_br" 00:20:07.259 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:20:07.259 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:07.518 Cannot find device "nvmf_init_br2" 00:20:07.518 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:20:07.518 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:07.518 Cannot find device "nvmf_tgt_br" 00:20:07.518 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:20:07.518 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:07.518 Cannot find device "nvmf_tgt_br2" 00:20:07.518 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:20:07.518 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:07.518 Cannot find device "nvmf_br" 00:20:07.518 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:20:07.518 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:07.518 Cannot find device "nvmf_init_if" 00:20:07.518 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:20:07.518 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:07.518 Cannot find device "nvmf_init_if2" 00:20:07.518 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:20:07.518 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:07.518 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:07.518 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:20:07.518 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:07.518 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:07.518 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:20:07.518 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:07.518 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:07.518 14:24:35 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:07.518 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:07.518 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:07.518 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:07.518 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:07.518 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:07.518 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:07.518 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:07.518 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:07.518 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:07.518 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:07.518 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:07.518 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:07.518 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:07.518 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:07.518 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:07.518 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:07.518 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:07.778 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:07.778 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:07.778 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:07.778 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:07.778 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:07.778 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:07.778 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:07.778 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:07.778 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:07.778 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:07.778 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:07.778 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:07.778 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:07.778 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:07.778 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.133 ms 00:20:07.778 00:20:07.778 --- 10.0.0.3 ping statistics --- 00:20:07.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:07.778 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:20:07.778 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:07.778 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:07.778 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.099 ms 00:20:07.778 00:20:07.778 --- 10.0.0.4 ping statistics --- 00:20:07.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:07.778 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:20:07.778 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:07.778 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:07.778 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms 00:20:07.778 00:20:07.778 --- 10.0.0.1 ping statistics --- 00:20:07.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:07.778 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:20:07.778 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:07.778 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:07.778 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:20:07.778 00:20:07.778 --- 10.0.0.2 ping statistics --- 00:20:07.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:07.778 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:20:07.778 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:07.778 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:20:07.778 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:07.778 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:07.778 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:07.778 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:07.778 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:07.778 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:07.778 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:07.778 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:20:07.778 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:07.778 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:07.778 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:07.778 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=76372 00:20:07.778 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:07.778 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 76372 00:20:07.778 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 76372 ']' 00:20:07.778 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:07.778 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:07.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:07.778 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:07.778 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:07.778 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:08.037 [2024-11-06 14:24:35.495452] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:20:08.037 [2024-11-06 14:24:35.495589] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:08.296 [2024-11-06 14:24:35.686060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.296 [2024-11-06 14:24:35.835054] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:08.296 [2024-11-06 14:24:35.835123] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:08.296 [2024-11-06 14:24:35.835140] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:08.296 [2024-11-06 14:24:35.835151] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:08.296 [2024-11-06 14:24:35.835165] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:08.296 [2024-11-06 14:24:35.836574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:08.556 [2024-11-06 14:24:36.090891] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:08.815 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:08.815 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:20:08.815 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:08.815 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:08.815 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:08.815 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:08.815 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:20:08.815 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:08.815 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:20:08.815 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.hd0 00:20:08.815 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:08.815 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.hd0 00:20:08.815 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.hd0 00:20:08.815 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.hd0 00:20:08.815 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:09.074 [2024-11-06 14:24:36.681145] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:09.075 [2024-11-06 14:24:36.697074] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:09.075 [2024-11-06 14:24:36.697469] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:09.334 malloc0 00:20:09.334 14:24:36 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:09.334 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:09.334 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=76419 00:20:09.334 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 76419 /var/tmp/bdevperf.sock 00:20:09.334 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 76419 ']' 00:20:09.334 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:09.334 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:09.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:09.334 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:09.334 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:09.334 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:09.334 [2024-11-06 14:24:36.910628] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:20:09.334 [2024-11-06 14:24:36.910772] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76419 ] 00:20:09.594 [2024-11-06 14:24:37.097466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.853 [2024-11-06 14:24:37.249227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:10.112 [2024-11-06 14:24:37.499143] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:10.371 14:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:10.371 14:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:20:10.371 14:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.hd0 00:20:10.630 14:24:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:10.631 [2024-11-06 14:24:38.212571] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:10.923 TLSTESTn1 00:20:10.923 14:24:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:10.923 Running I/O for 10 seconds... 
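The trace above is the core of the FIPS TLS data-path check: the PSK in NVMe TLS interchange format is written to a temporary file (chmod 0600), registered with bdevperf's keyring, and used to attach an NVMe/TCP controller to the listener at 10.0.0.3:4420, after which a 10-second verify workload runs against the resulting TLSTESTn1 bdev. Condensed from the RPCs shown in the trace (the key name key0, the socket path, and the logged temporary key file are taken as-is from the log), the sequence is roughly:

# register the PSK file with the bdevperf keyring, then attach over TLS
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    keyring_file_add_key key0 /tmp/spdk-psk.hd0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
# kick off the queued verify workload; bdevperf prints per-second IOPS samples
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests

The per-second figures that follow are bdevperf's running IOPS/throughput samples for that 10-second run, followed by the summary table and its JSON form.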
00:20:13.238 3712.00 IOPS, 14.50 MiB/s [2024-11-06T14:24:41.441Z] 3712.00 IOPS, 14.50 MiB/s [2024-11-06T14:24:42.839Z] 3769.00 IOPS, 14.72 MiB/s [2024-11-06T14:24:43.775Z] 3801.25 IOPS, 14.85 MiB/s [2024-11-06T14:24:44.711Z] 3803.40 IOPS, 14.86 MiB/s [2024-11-06T14:24:45.688Z] 3808.00 IOPS, 14.88 MiB/s [2024-11-06T14:24:46.625Z] 3815.00 IOPS, 14.90 MiB/s [2024-11-06T14:24:47.562Z] 3818.12 IOPS, 14.91 MiB/s [2024-11-06T14:24:48.539Z] 3830.44 IOPS, 14.96 MiB/s [2024-11-06T14:24:48.539Z] 3806.10 IOPS, 14.87 MiB/s 00:20:20.904 Latency(us) 00:20:20.904 [2024-11-06T14:24:48.539Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:20.904 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:20.904 Verification LBA range: start 0x0 length 0x2000 00:20:20.904 TLSTESTn1 : 10.02 3812.08 14.89 0.00 0.00 33524.48 6527.28 35163.09 00:20:20.904 [2024-11-06T14:24:48.539Z] =================================================================================================================== 00:20:20.904 [2024-11-06T14:24:48.539Z] Total : 3812.08 14.89 0.00 0.00 33524.48 6527.28 35163.09 00:20:20.904 { 00:20:20.904 "results": [ 00:20:20.904 { 00:20:20.904 "job": "TLSTESTn1", 00:20:20.904 "core_mask": "0x4", 00:20:20.904 "workload": "verify", 00:20:20.904 "status": "finished", 00:20:20.904 "verify_range": { 00:20:20.904 "start": 0, 00:20:20.904 "length": 8192 00:20:20.904 }, 00:20:20.904 "queue_depth": 128, 00:20:20.904 "io_size": 4096, 00:20:20.904 "runtime": 10.017371, 00:20:20.904 "iops": 3812.0780392380398, 00:20:20.904 "mibps": 14.890929840773593, 00:20:20.904 "io_failed": 0, 00:20:20.904 "io_timeout": 0, 00:20:20.904 "avg_latency_us": 33524.47934132634, 00:20:20.904 "min_latency_us": 6527.28032128514, 00:20:20.904 "max_latency_us": 35163.09076305221 00:20:20.904 } 00:20:20.904 ], 00:20:20.904 "core_count": 1 00:20:20.904 } 00:20:20.904 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:20.904 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:20.904 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # type=--id 00:20:20.904 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@811 -- # id=0 00:20:20.904 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:20:20.904 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:20.904 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:20:20.904 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:20:20.904 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@822 -- # for n in $shm_files 00:20:20.904 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:20.904 nvmf_trace.0 00:20:21.164 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # return 0 00:20:21.164 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 76419 00:20:21.164 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 76419 ']' 00:20:21.164 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 
76419 00:20:21.164 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:20:21.164 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:21.164 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76419 00:20:21.164 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:21.164 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:21.164 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76419' 00:20:21.164 killing process with pid 76419 00:20:21.164 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 76419 00:20:21.164 Received shutdown signal, test time was about 10.000000 seconds 00:20:21.164 00:20:21.164 Latency(us) 00:20:21.164 [2024-11-06T14:24:48.799Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:21.164 [2024-11-06T14:24:48.799Z] =================================================================================================================== 00:20:21.164 [2024-11-06T14:24:48.799Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:21.164 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 76419 00:20:22.543 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:22.543 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:22.543 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:20:22.543 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:22.543 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:20:22.543 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:22.543 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:22.543 rmmod nvme_tcp 00:20:22.543 rmmod nvme_fabrics 00:20:22.543 rmmod nvme_keyring 00:20:22.543 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:22.543 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:20:22.543 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:20:22.543 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 76372 ']' 00:20:22.543 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 76372 00:20:22.543 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 76372 ']' 00:20:22.543 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 76372 00:20:22.543 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:20:22.543 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:22.543 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76372 00:20:22.543 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:22.543 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' 
reactor_1 = sudo ']' 00:20:22.543 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76372' 00:20:22.543 killing process with pid 76372 00:20:22.543 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 76372 00:20:22.543 14:24:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 76372 00:20:23.922 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:23.922 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:23.922 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:23.922 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:20:23.922 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:23.922 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:20:23.922 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:20:23.922 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:23.922 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:23.922 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:23.922 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:23.922 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:23.922 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:23.922 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:23.922 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:23.922 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:23.922 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:23.922 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:23.922 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:23.922 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:24.181 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:24.181 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:24.181 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:24.182 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:24.182 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:24.182 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:24.182 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:20:24.182 14:24:51 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.hd0 00:20:24.182 00:20:24.182 real 0m17.368s 00:20:24.182 user 0m22.796s 00:20:24.182 sys 0m6.885s 00:20:24.182 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:24.182 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:24.182 ************************************ 00:20:24.182 END TEST nvmf_fips 00:20:24.182 ************************************ 00:20:24.182 14:24:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:24.182 14:24:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:24.182 14:24:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:24.182 14:24:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:24.182 ************************************ 00:20:24.182 START TEST nvmf_control_msg_list 00:20:24.182 ************************************ 00:20:24.182 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:24.441 * Looking for test storage... 00:20:24.441 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:24.441 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:24.441 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:20:24.441 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:24.442 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:24.442 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:24.442 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:24.442 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:24.442 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:20:24.442 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:20:24.442 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:20:24.442 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:20:24.442 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:20:24.442 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:20:24.442 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:20:24.442 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:24.442 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:20:24.442 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:20:24.442 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:20:24.442 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:24.442 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:20:24.442 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:20:24.442 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:24.442 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:20:24.442 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:20:24.442 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:20:24.442 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:20:24.442 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:24.442 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:20:24.442 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:20:24.442 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:24.442 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:24.442 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:20:24.442 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:24.442 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:24.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:24.442 --rc genhtml_branch_coverage=1 00:20:24.442 --rc genhtml_function_coverage=1 00:20:24.442 --rc genhtml_legend=1 00:20:24.442 --rc geninfo_all_blocks=1 00:20:24.442 --rc geninfo_unexecuted_blocks=1 00:20:24.442 00:20:24.442 ' 00:20:24.442 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:24.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:24.442 --rc genhtml_branch_coverage=1 00:20:24.442 --rc genhtml_function_coverage=1 00:20:24.442 --rc genhtml_legend=1 00:20:24.442 --rc geninfo_all_blocks=1 00:20:24.442 --rc geninfo_unexecuted_blocks=1 00:20:24.442 00:20:24.442 ' 00:20:24.442 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:24.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:24.442 --rc genhtml_branch_coverage=1 00:20:24.442 --rc genhtml_function_coverage=1 00:20:24.442 --rc genhtml_legend=1 00:20:24.442 --rc geninfo_all_blocks=1 00:20:24.442 --rc geninfo_unexecuted_blocks=1 00:20:24.442 00:20:24.442 ' 00:20:24.442 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:24.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:24.442 --rc genhtml_branch_coverage=1 00:20:24.442 --rc genhtml_function_coverage=1 00:20:24.442 --rc genhtml_legend=1 00:20:24.442 --rc geninfo_all_blocks=1 00:20:24.442 --rc 
geninfo_unexecuted_blocks=1 00:20:24.442 00:20:24.442 ' 00:20:24.442 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:24.442 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:20:24.442 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:24.442 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:24.442 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:24.442 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:24.442 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:24.442 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:24.442 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:24.442 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:24.442 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:24.442 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:24.442 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:20:24.442 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:20:24.442 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:24.442 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:24.442 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:24.442 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:24.442 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:24.442 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:20:24.442 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:24.442 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:24.442 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:24.442 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.442 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.442 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.442 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:20:24.442 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.442 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:20:24.442 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:24.442 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:24.442 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:24.442 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:24.442 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:24.442 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:24.442 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:24.442 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:24.442 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:24.442 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:24.442 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:20:24.443 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:24.443 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:24.443 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:24.443 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:24.443 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:24.443 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:24.443 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:24.443 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:24.443 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:24.443 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:24.443 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:24.443 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:24.443 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:24.443 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:24.443 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:24.443 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:24.443 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:24.443 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:24.443 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:24.443 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:24.443 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:24.443 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:24.443 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:24.443 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:24.443 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:24.443 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:24.443 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:24.443 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:24.443 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:24.443 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:24.443 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:24.443 Cannot find device "nvmf_init_br" 00:20:24.443 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:20:24.443 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:24.703 Cannot find device "nvmf_init_br2" 00:20:24.703 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:20:24.703 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:24.703 Cannot find device "nvmf_tgt_br" 00:20:24.703 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:20:24.703 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:24.703 Cannot find device "nvmf_tgt_br2" 00:20:24.703 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:20:24.703 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:24.703 Cannot find device "nvmf_init_br" 00:20:24.703 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:20:24.703 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:24.703 Cannot find device "nvmf_init_br2" 00:20:24.703 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:20:24.703 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:24.703 Cannot find device "nvmf_tgt_br" 00:20:24.703 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:20:24.703 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:24.703 Cannot find device "nvmf_tgt_br2" 00:20:24.703 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:20:24.703 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:24.703 Cannot find device "nvmf_br" 00:20:24.703 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:20:24.703 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:24.703 Cannot find 
device "nvmf_init_if" 00:20:24.703 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:20:24.703 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:24.703 Cannot find device "nvmf_init_if2" 00:20:24.703 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:20:24.703 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:24.703 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:24.703 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:20:24.703 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:24.703 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:24.703 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:20:24.703 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:24.703 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:24.703 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:24.703 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:24.703 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:24.703 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:24.962 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:24.962 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:24.962 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:24.962 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:24.962 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:24.962 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:24.962 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:24.962 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:24.962 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:24.962 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:24.962 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:24.962 14:24:52 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:24.963 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:24.963 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:24.963 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:24.963 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:24.963 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:24.963 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:24.963 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:24.963 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:24.963 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:24.963 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:24.963 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:24.963 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:24.963 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:24.963 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:24.963 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:24.963 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:24.963 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.147 ms 00:20:24.963 00:20:24.963 --- 10.0.0.3 ping statistics --- 00:20:24.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.963 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:20:24.963 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:24.963 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:24.963 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.091 ms 00:20:24.963 00:20:24.963 --- 10.0.0.4 ping statistics --- 00:20:24.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.963 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:20:24.963 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:24.963 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:24.963 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:20:24.963 00:20:24.963 --- 10.0.0.1 ping statistics --- 00:20:24.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.963 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:20:24.963 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:24.963 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:24.963 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:20:24.963 00:20:24.963 --- 10.0.0.2 ping statistics --- 00:20:24.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.963 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:20:24.963 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:24.963 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:20:24.963 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:24.963 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:24.963 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:24.963 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:24.963 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:24.963 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:24.963 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:25.222 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:20:25.222 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:25.222 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:25.222 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:25.222 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=76827 00:20:25.222 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:25.222 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 76827 00:20:25.222 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@833 -- # '[' -z 76827 ']' 00:20:25.222 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:25.222 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:25.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:25.222 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:25.222 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:25.222 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:25.222 [2024-11-06 14:24:52.736407] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:20:25.222 [2024-11-06 14:24:52.737333] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:25.481 [2024-11-06 14:24:52.923657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.481 [2024-11-06 14:24:53.077479] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:25.481 [2024-11-06 14:24:53.077852] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:25.481 [2024-11-06 14:24:53.077952] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:25.481 [2024-11-06 14:24:53.078030] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:25.481 [2024-11-06 14:24:53.078089] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:25.481 [2024-11-06 14:24:53.079712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:25.740 [2024-11-06 14:24:53.337579] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:25.999 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:25.999 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@866 -- # return 0 00:20:25.999 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:25.999 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:25.999 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:25.999 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:25.999 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:25.999 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:20:25.999 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:20:25.999 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.999 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:25.999 [2024-11-06 14:24:53.628430] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:25.999 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.999 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:20:26.259 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.259 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:26.259 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.259 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:26.259 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.259 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:26.259 Malloc0 00:20:26.259 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.259 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:26.259 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.259 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:26.259 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.259 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:26.259 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.259 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:26.259 [2024-11-06 14:24:53.717256] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:26.259 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.259 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=76859 00:20:26.259 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:20:26.259 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=76860 00:20:26.259 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:20:26.259 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=76861 00:20:26.259 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:20:26.259 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 76859 00:20:26.518 [2024-11-06 14:24:53.977770] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:26.518 [2024-11-06 14:24:54.007595] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:26.518 [2024-11-06 14:24:54.018209] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:27.456 Initializing NVMe Controllers 00:20:27.456 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:20:27.456 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:20:27.456 Initialization complete. Launching workers. 00:20:27.456 ======================================================== 00:20:27.456 Latency(us) 00:20:27.456 Device Information : IOPS MiB/s Average min max 00:20:27.456 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3604.00 14.08 277.19 104.51 1675.54 00:20:27.456 ======================================================== 00:20:27.456 Total : 3604.00 14.08 277.19 104.51 1675.54 00:20:27.456 00:20:27.456 Initializing NVMe Controllers 00:20:27.456 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:20:27.456 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:20:27.456 Initialization complete. Launching workers. 00:20:27.456 ======================================================== 00:20:27.456 Latency(us) 00:20:27.456 Device Information : IOPS MiB/s Average min max 00:20:27.456 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3572.00 13.95 279.62 136.89 939.38 00:20:27.456 ======================================================== 00:20:27.456 Total : 3572.00 13.95 279.62 136.89 939.38 00:20:27.456 00:20:27.456 Initializing NVMe Controllers 00:20:27.456 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:20:27.456 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:20:27.456 Initialization complete. Launching workers. 
00:20:27.456 ======================================================== 00:20:27.456 Latency(us) 00:20:27.456 Device Information : IOPS MiB/s Average min max 00:20:27.456 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3618.00 14.13 276.07 104.60 693.09 00:20:27.456 ======================================================== 00:20:27.456 Total : 3618.00 14.13 276.07 104.60 693.09 00:20:27.456 00:20:27.456 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 76860 00:20:27.456 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 76861 00:20:27.456 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:27.456 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:20:27.456 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:27.456 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:20:27.715 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:27.715 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:20:27.715 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:27.715 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:27.715 rmmod nvme_tcp 00:20:27.715 rmmod nvme_fabrics 00:20:27.715 rmmod nvme_keyring 00:20:27.715 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:27.715 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:20:27.715 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:20:27.715 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 76827 ']' 00:20:27.715 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 76827 00:20:27.715 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@952 -- # '[' -z 76827 ']' 00:20:27.715 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # kill -0 76827 00:20:27.715 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # uname 00:20:27.715 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:27.715 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76827 00:20:27.715 killing process with pid 76827 00:20:27.715 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:27.715 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:27.715 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76827' 00:20:27.715 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@971 -- # kill 76827 00:20:27.715 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@976 -- # wait 76827 00:20:29.092 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:29.092 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:29.092 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:29.092 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:20:29.092 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:20:29.092 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:29.092 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:20:29.092 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:29.092 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:29.092 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:29.092 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:29.092 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:29.092 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:29.092 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:29.092 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:29.092 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:29.092 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:29.351 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:29.351 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:29.351 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:29.351 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:29.351 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:29.351 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:29.351 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:29.351 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:29.351 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:29.351 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:20:29.351 00:20:29.352 real 0m5.166s 00:20:29.352 user 0m6.793s 00:20:29.352 
sys 0m2.092s 00:20:29.352 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:29.352 ************************************ 00:20:29.352 END TEST nvmf_control_msg_list 00:20:29.352 ************************************ 00:20:29.352 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:29.649 14:24:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:29.650 14:24:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:29.650 14:24:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:29.650 14:24:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:29.650 ************************************ 00:20:29.650 START TEST nvmf_wait_for_buf 00:20:29.650 ************************************ 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:29.650 * Looking for test storage... 00:20:29.650 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:29.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.650 --rc genhtml_branch_coverage=1 00:20:29.650 --rc genhtml_function_coverage=1 00:20:29.650 --rc genhtml_legend=1 00:20:29.650 --rc geninfo_all_blocks=1 00:20:29.650 --rc geninfo_unexecuted_blocks=1 00:20:29.650 00:20:29.650 ' 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:29.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.650 --rc genhtml_branch_coverage=1 00:20:29.650 --rc genhtml_function_coverage=1 00:20:29.650 --rc genhtml_legend=1 00:20:29.650 --rc geninfo_all_blocks=1 00:20:29.650 --rc geninfo_unexecuted_blocks=1 00:20:29.650 00:20:29.650 ' 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:29.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.650 --rc genhtml_branch_coverage=1 00:20:29.650 --rc genhtml_function_coverage=1 00:20:29.650 --rc genhtml_legend=1 00:20:29.650 --rc geninfo_all_blocks=1 00:20:29.650 --rc geninfo_unexecuted_blocks=1 00:20:29.650 00:20:29.650 ' 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:29.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.650 --rc genhtml_branch_coverage=1 00:20:29.650 --rc genhtml_function_coverage=1 00:20:29.650 --rc genhtml_legend=1 00:20:29.650 --rc geninfo_all_blocks=1 00:20:29.650 --rc geninfo_unexecuted_blocks=1 00:20:29.650 00:20:29.650 ' 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:29.650 14:24:57 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:20:29.650 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:29.910 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
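The nvmftestinit call traced next rebuilds the test network from scratch; the "Cannot find device" and "Cannot open network namespace" messages a few lines below are only the best-effort cleanup of leftover interfaces (each failing command is followed by true) before they are recreated. Stripped of the per-command xtrace noise, the topology nvmf_veth_init builds is roughly the sketch below, using the same commands visible in the trace; the second veth pair (nvmf_init_if2/nvmf_tgt_if2 at 10.0.0.2 and 10.0.0.4) is omitted here because it follows the identical pattern.

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                   # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up; ip link set nvmf_tgt_br up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                          # bridge initiator and target sides
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT    # open the NVMe/TCP port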
00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:29.910 Cannot find device "nvmf_init_br" 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:29.910 Cannot find device "nvmf_init_br2" 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:29.910 Cannot find device "nvmf_tgt_br" 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:29.910 Cannot find device "nvmf_tgt_br2" 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:29.910 Cannot find device "nvmf_init_br" 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:29.910 Cannot find device "nvmf_init_br2" 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:29.910 Cannot find device "nvmf_tgt_br" 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:29.910 Cannot find device "nvmf_tgt_br2" 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:29.910 Cannot find device "nvmf_br" 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:29.910 Cannot find device "nvmf_init_if" 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:29.910 Cannot find device "nvmf_init_if2" 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:29.910 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:29.910 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:29.910 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:30.170 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:30.170 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:30.170 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:30.170 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:30.170 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:30.170 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:30.170 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:30.170 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:30.170 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:30.170 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:30.170 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:30.170 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:30.170 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:30.170 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:30.170 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:30.170 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:30.170 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:30.170 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:30.170 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:30.170 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:30.170 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:30.170 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:30.170 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:30.170 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:30.170 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:30.170 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:30.170 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:30.170 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:30.170 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:30.170 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:30.170 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:30.170 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.103 ms 00:20:30.170 00:20:30.170 --- 10.0.0.3 ping statistics --- 00:20:30.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:30.170 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:20:30.170 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:30.170 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:30.170 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:20:30.170 00:20:30.170 --- 10.0.0.4 ping statistics --- 00:20:30.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:30.170 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:20:30.170 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:30.170 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:30.170 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:20:30.170 00:20:30.170 --- 10.0.0.1 ping statistics --- 00:20:30.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:30.170 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:20:30.170 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:30.429 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:30.429 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:20:30.429 00:20:30.429 --- 10.0.0.2 ping statistics --- 00:20:30.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:30.429 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:20:30.429 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:30.429 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:20:30.429 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:30.429 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:30.429 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:30.429 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:30.429 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:30.429 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:30.429 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:30.429 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:20:30.430 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:30.430 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:30.430 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:30.430 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:30.430 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=77120 00:20:30.430 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 77120 00:20:30.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:30.430 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@833 -- # '[' -z 77120 ']' 00:20:30.430 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:30.430 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:30.430 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:30.430 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:30.430 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:30.430 [2024-11-06 14:24:57.993906] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
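The wait_for_buf test starting here (pid 77120) launches the target with --wait-for-rpc so it can shrink the iobuf small pool before the framework finishes initializing; with only 154 small buffers of 8192 bytes and a transport created with small shared-buffer counts, the 128 KiB reads issued by perf are forced to wait for buffers, and the test then reads the nvmf_TCP small_pool.retry counter (4750 in this run) to confirm the wait path was exercised. Condensed, the sequence traced below is roughly the following; rpc.py stands in for the harness's rpc_cmd helper and binary paths are abbreviated, but the flags are the ones visible in the trace.

scripts/rpc.py accel_set_options --small-cache-size 0 --large-cache-size 0
scripts/rpc.py iobuf_set_options --small-pool-count 154 --small_bufsize=8192   # shrink the small iobuf pool
scripts/rpc.py framework_start_init                                            # only now finish app init
scripts/rpc.py bdev_malloc_create -b Malloc0 32 512
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24             # few shared buffers
scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
scripts/rpc.py iobuf_get_stats | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'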
00:20:30.430 [2024-11-06 14:24:57.994039] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:30.688 [2024-11-06 14:24:58.183360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.947 [2024-11-06 14:24:58.329637] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:30.947 [2024-11-06 14:24:58.329706] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:30.947 [2024-11-06 14:24:58.329723] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:30.947 [2024-11-06 14:24:58.329761] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:30.947 [2024-11-06 14:24:58.329776] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:30.947 [2024-11-06 14:24:58.331260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:31.207 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:31.207 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@866 -- # return 0 00:20:31.207 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:31.207 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:31.207 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:31.466 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:31.466 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:31.466 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:20:31.466 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:20:31.466 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.466 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:31.466 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.466 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:20:31.466 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.466 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:31.466 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.466 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:20:31.466 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.466 14:24:58 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:31.466 [2024-11-06 14:24:59.057542] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:31.726 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.726 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:31.726 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.726 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:31.726 Malloc0 00:20:31.726 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.726 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:20:31.726 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.726 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:31.726 [2024-11-06 14:24:59.262509] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:31.726 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.726 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:20:31.726 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.726 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:31.726 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.726 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:31.726 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.726 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:31.726 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.726 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:31.726 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.726 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:31.726 [2024-11-06 14:24:59.294655] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:31.726 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.726 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:20:31.984 [2024-11-06 14:24:59.546041] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:33.359 Initializing NVMe Controllers 00:20:33.359 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:20:33.359 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:20:33.359 Initialization complete. Launching workers. 00:20:33.359 ======================================================== 00:20:33.359 Latency(us) 00:20:33.359 Device Information : IOPS MiB/s Average min max 00:20:33.359 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 499.71 62.46 8003.72 5995.15 10344.43 00:20:33.359 ======================================================== 00:20:33.359 Total : 499.71 62.46 8003.72 5995.15 10344.43 00:20:33.359 00:20:33.359 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:20:33.359 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:20:33.359 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.359 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:33.359 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.359 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4750 00:20:33.359 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4750 -eq 0 ]] 00:20:33.359 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:33.359 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:20:33.359 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:33.359 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:20:33.618 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:33.618 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:20:33.618 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:33.618 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:33.618 rmmod nvme_tcp 00:20:33.618 rmmod nvme_fabrics 00:20:33.618 rmmod nvme_keyring 00:20:33.618 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:33.618 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:20:33.618 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:20:33.618 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 77120 ']' 00:20:33.618 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 77120 00:20:33.618 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@952 -- # '[' -z 77120 ']' 00:20:33.618 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- 
# kill -0 77120 00:20:33.618 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # uname 00:20:33.618 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:33.618 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77120 00:20:33.618 killing process with pid 77120 00:20:33.618 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:33.618 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:33.618 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77120' 00:20:33.618 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@971 -- # kill 77120 00:20:33.618 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@976 -- # wait 77120 00:20:34.997 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:34.997 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:34.997 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:34.997 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:20:34.997 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:20:34.997 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:20:34.997 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:34.997 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:34.998 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:34.998 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:34.998 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:34.998 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:34.998 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:34.998 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:34.998 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:34.998 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:34.998 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:34.998 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:34.998 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:34.998 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:34.998 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:34.998 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:34.998 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:34.998 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:34.998 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:34.998 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:34.998 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:20:34.998 00:20:34.998 real 0m5.579s 00:20:34.998 user 0m4.526s 00:20:34.998 sys 0m1.387s 00:20:34.998 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:34.998 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:34.998 ************************************ 00:20:34.998 END TEST nvmf_wait_for_buf 00:20:34.998 ************************************ 00:20:35.257 14:25:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:20:35.257 14:25:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:20:35.257 14:25:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:35.257 14:25:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:35.257 14:25:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:35.257 ************************************ 00:20:35.257 START TEST nvmf_fuzz 00:20:35.257 ************************************ 00:20:35.257 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:20:35.257 * Looking for test storage... 
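For reference, the wait_for_buf pass/fail decision traced a few lines above reduces to one JSON-RPC query: read the nvmf_TCP small-buffer retry counter and require it to be non-zero. A minimal standalone sketch of that check (assuming the usual scripts/rpc.py wrapper behind rpc_cmd, the default /var/tmp/spdk.sock RPC socket, and jq on the PATH; not the verbatim wait_for_buf.sh):
# Sketch only: the same iobuf query wait_for_buf.sh@32 issued above, run outside the test harness.
retry_count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py iobuf_get_stats \
  | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
if [[ "$retry_count" -eq 0 ]]; then
  echo "FAIL: small iobuf pool never ran dry, no retries recorded"
else
  echo "PASS: $retry_count small-buffer retries observed (4750 in the run above)"
fi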
00:20:35.257 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:35.257 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:35.257 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:20:35.257 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:35.257 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:35.257 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:35.257 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:35.257 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:35.257 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:20:35.257 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:20:35.257 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:20:35.257 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:20:35.257 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:20:35.257 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:20:35.257 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:20:35.257 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:35.257 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:20:35.257 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:20:35.257 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:35.257 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:35.257 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:20:35.257 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:20:35.257 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:35.257 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:20:35.257 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:20:35.257 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:20:35.257 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:20:35.257 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:35.257 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:20:35.257 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:20:35.257 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:35.257 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:35.257 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:20:35.257 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:35.257 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:35.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.257 --rc genhtml_branch_coverage=1 00:20:35.257 --rc genhtml_function_coverage=1 00:20:35.257 --rc genhtml_legend=1 00:20:35.257 --rc geninfo_all_blocks=1 00:20:35.257 --rc geninfo_unexecuted_blocks=1 00:20:35.257 00:20:35.257 ' 00:20:35.257 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:35.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.257 --rc genhtml_branch_coverage=1 00:20:35.257 --rc genhtml_function_coverage=1 00:20:35.257 --rc genhtml_legend=1 00:20:35.257 --rc geninfo_all_blocks=1 00:20:35.257 --rc geninfo_unexecuted_blocks=1 00:20:35.257 00:20:35.257 ' 00:20:35.257 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:35.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.257 --rc genhtml_branch_coverage=1 00:20:35.257 --rc genhtml_function_coverage=1 00:20:35.257 --rc genhtml_legend=1 00:20:35.257 --rc geninfo_all_blocks=1 00:20:35.257 --rc geninfo_unexecuted_blocks=1 00:20:35.257 00:20:35.257 ' 00:20:35.257 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:35.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.257 --rc genhtml_branch_coverage=1 00:20:35.257 --rc genhtml_function_coverage=1 00:20:35.257 --rc genhtml_legend=1 00:20:35.257 --rc geninfo_all_blocks=1 00:20:35.257 --rc geninfo_unexecuted_blocks=1 00:20:35.257 00:20:35.257 ' 00:20:35.258 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:35.517 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:20:35.517 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
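The lt/cmp_versions dance traced above (here gating which lcov coverage options get exported) is a plain component-wise numeric version compare. A simplified sketch of that logic, assuming purely numeric version components and not reproducing the scripts/common.sh helper verbatim:
# Simplified sketch of the version test traced above; the real helper lives in scripts/common.sh.
version_lt() {                                  # succeeds when $1 < $2
  local -a ver1 ver2
  IFS='.-:' read -ra ver1 <<< "$1"
  IFS='.-:' read -ra ver2 <<< "$2"
  local i max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( i = 0; i < max; i++ )); do
    (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
    (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
  done
  return 1                                      # equal is not less-than
}
version_lt "$(lcov --version | awk '{print $NF}')" 2 \
  && echo "lcov < 2: keep the legacy --rc lcov_branch_coverage/--rc lcov_function_coverage options"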
00:20:35.517 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:35.517 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:35.517 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:35.517 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:35.517 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:35.517 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:35.517 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:35.517 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:35.517 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:35.517 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:20:35.517 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:20:35.517 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:35.517 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:35.517 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:35.517 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:35.518 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:35.518 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:20:35.518 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:35.518 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:35.518 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:35.518 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.518 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.518 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.518 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:20:35.518 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.518 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:20:35.518 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:35.518 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:35.518 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:35.518 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:35.518 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:35.518 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:35.518 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:35.518 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:35.518 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:35.518 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:35.518 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:20:35.518 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:35.518 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
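The nvmf/common.sh preamble above also fixes the initiator identity once per run: nvme gen-hostnqn mints a uuid-based host NQN, the uuid doubles as the host ID, and both are packed into NVME_HOST for every later nvme connect. A compact sketch of that relationship (not the verbatim script; the exact HOSTID derivation is an assumption inferred from the values in the trace, and the uuid naturally differs per run):
# Sketch of the host-identity and app-argument setup traced above.
NVME_HOSTNQN=$(nvme gen-hostnqn)            # e.g. nqn.2014-08.org.nvmexpress:uuid:406d54d0-...
NVME_HOSTID=${NVME_HOSTNQN##*:}             # uuid suffix reused as the host ID (assumed derivation)
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)
NVMF_APP+=(-i "${NVMF_APP_SHM_ID:-0}" -e 0xFFFF)   # shm id + full trace mask, as in the launch below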
00:20:35.518 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:35.518 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:35.518 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:35.518 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:35.518 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:35.518 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:35.518 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:35.518 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:35.518 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:35.518 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:35.518 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:35.518 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:35.518 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:35.518 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:35.518 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:35.518 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:35.518 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:35.518 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:35.518 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:35.518 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:35.518 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:35.518 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:35.518 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:35.518 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:35.518 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:35.518 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:35.518 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:35.518 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:35.518 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:35.518 Cannot find device "nvmf_init_br" 00:20:35.518 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # true 00:20:35.518 14:25:02 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:35.518 Cannot find device "nvmf_init_br2" 00:20:35.518 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # true 00:20:35.518 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:35.518 Cannot find device "nvmf_tgt_br" 00:20:35.518 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # true 00:20:35.518 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:35.518 Cannot find device "nvmf_tgt_br2" 00:20:35.518 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # true 00:20:35.518 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:35.518 Cannot find device "nvmf_init_br" 00:20:35.518 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # true 00:20:35.518 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:35.518 Cannot find device "nvmf_init_br2" 00:20:35.518 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # true 00:20:35.518 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:35.518 Cannot find device "nvmf_tgt_br" 00:20:35.518 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # true 00:20:35.518 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:35.518 Cannot find device "nvmf_tgt_br2" 00:20:35.518 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # true 00:20:35.518 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:35.518 Cannot find device "nvmf_br" 00:20:35.518 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # true 00:20:35.518 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:35.518 Cannot find device "nvmf_init_if" 00:20:35.518 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # true 00:20:35.518 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:35.518 Cannot find device "nvmf_init_if2" 00:20:35.518 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # true 00:20:35.518 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:35.518 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:35.518 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # true 00:20:35.518 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:35.778 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:35.778 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # true 00:20:35.778 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:35.778 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:35.778 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:20:35.778 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:35.778 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:35.778 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:35.778 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:35.778 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:35.778 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:35.778 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:35.778 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:35.778 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:35.778 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:35.778 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:35.778 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:35.778 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:35.778 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:35.778 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:35.778 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:35.778 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:35.778 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:35.778 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:35.778 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:35.778 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:35.778 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:35.778 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:35.778 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:35.778 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:35.778 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:35.778 14:25:03 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:35.778 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:35.778 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:35.778 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:35.778 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:35.778 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:20:35.778 00:20:35.778 --- 10.0.0.3 ping statistics --- 00:20:35.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:35.778 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:20:35.778 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:36.039 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:36.039 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.075 ms 00:20:36.039 00:20:36.039 --- 10.0.0.4 ping statistics --- 00:20:36.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.039 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:20:36.039 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:36.039 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:36.039 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:20:36.039 00:20:36.039 --- 10.0.0.1 ping statistics --- 00:20:36.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.039 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:20:36.039 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:36.039 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:36.039 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:20:36.039 00:20:36.039 --- 10.0.0.2 ping statistics --- 00:20:36.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.039 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:20:36.039 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:36.039 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@461 -- # return 0 00:20:36.039 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:36.039 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:36.039 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:36.039 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:36.039 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:36.039 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:36.039 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:36.039 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=77428 00:20:36.039 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:36.039 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:20:36.039 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 77428 00:20:36.039 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@833 -- # '[' -z 77428 ']' 00:20:36.039 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:36.039 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:36.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:36.039 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
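Condensed, the nvmf_veth_init plumbing and target launch traced above come down to: one network namespace, four veth pairs bridged on the host side, 10.0.0.1/.2 facing the initiator and 10.0.0.3/.4 inside the namespace, iptables holes for port 4420, then nvmf_tgt started inside the namespace. A sketch of that sequence with the names and addresses copied from the trace (interface bring-up abbreviated; the real run additionally tags each iptables rule with an SPDK_NVMF comment so the later iptables-save | grep -v SPDK_NVMF | iptables-restore cleanup can strip it):
# Sketch of the topology built above: target side isolated in a netns, host side bridged.
ip netns add nvmf_tgt_ns_spdk
for pair in nvmf_init_if:nvmf_init_br nvmf_init_if2:nvmf_init_br2 \
            nvmf_tgt_if:nvmf_tgt_br nvmf_tgt_if2:nvmf_tgt_br2; do
  ip link add "${pair%%:*}" type veth peer name "${pair##*:}"
done
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator-side addresses
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target-side addresses
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up && ip link set nvmf_init_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
for br_end in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
  ip link set "$br_end" up
  ip link set "$br_end" master nvmf_br
done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3 && ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1  # connectivity sanity check, as above
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &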
00:20:36.039 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:36.039 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:36.977 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:36.977 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@866 -- # return 0 00:20:36.977 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:36.977 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.977 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:36.977 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.977 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:20:36.977 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.977 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:36.977 Malloc0 00:20:36.977 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.977 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:36.977 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.977 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:36.977 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.977 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:36.977 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.977 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:36.977 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.977 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:36.977 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.977 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:36.977 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.977 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' 00:20:36.977 14:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -N -a 00:20:37.916 Shutting down the fuzz application 00:20:37.916 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 
'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:20:38.484 Shutting down the fuzz application 00:20:38.484 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:38.484 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.484 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:38.484 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.484 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:20:38.484 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:20:38.484 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:38.484 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:20:38.484 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:38.484 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:20:38.484 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:38.484 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:38.484 rmmod nvme_tcp 00:20:38.484 rmmod nvme_fabrics 00:20:38.743 rmmod nvme_keyring 00:20:38.743 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:38.743 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:20:38.743 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:20:38.743 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 77428 ']' 00:20:38.743 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 77428 00:20:38.743 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@952 -- # '[' -z 77428 ']' 00:20:38.743 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # kill -0 77428 00:20:38.743 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@957 -- # uname 00:20:38.743 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:38.744 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77428 00:20:38.744 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:38.744 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:38.744 killing process with pid 77428 00:20:38.744 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77428' 00:20:38.744 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@971 -- # kill 77428 00:20:38.744 14:25:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@976 -- # wait 77428 00:20:40.121 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:40.121 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:40.121 14:25:07 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:40.121 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:20:40.121 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save 00:20:40.121 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:40.121 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore 00:20:40.121 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:40.121 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:40.121 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:40.121 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:40.121 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:40.121 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:40.380 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:40.380 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:40.380 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:40.380 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:40.380 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:40.380 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:40.380 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:40.380 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:40.380 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:40.380 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:40.380 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:40.380 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:40.380 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:40.380 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@300 -- # return 0 00:20:40.380 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:20:40.380 00:20:40.380 real 0m5.317s 00:20:40.380 user 0m5.268s 00:20:40.380 sys 0m1.285s 00:20:40.380 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:40.380 14:25:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:40.380 ************************************ 00:20:40.380 END TEST nvmf_fuzz 00:20:40.380 ************************************ 00:20:40.639 14:25:08 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:20:40.639 14:25:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:40.639 14:25:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:40.639 14:25:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:40.639 ************************************ 00:20:40.639 START TEST nvmf_multiconnection 00:20:40.639 ************************************ 00:20:40.639 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:20:40.639 * Looking for test storage... 00:20:40.639 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:40.639 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:40.639 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1691 -- # lcov --version 00:20:40.639 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:40.639 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:40.639 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:40.639 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:40.639 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:40.639 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:20:40.639 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:20:40.639 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:20:40.639 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:20:40.639 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:20:40.639 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:20:40.639 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:20:40.639 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:40.639 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:20:40.639 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:20:40.639 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:40.639 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:40.639 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:20:40.639 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:20:40.639 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:40.639 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:20:40.639 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:20:40.639 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:20:40.639 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:20:40.639 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:40.639 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:20:40.639 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:20:40.639 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:40.639 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:40.639 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:20:40.639 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:40.639 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:40.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.639 --rc genhtml_branch_coverage=1 00:20:40.639 --rc genhtml_function_coverage=1 00:20:40.639 --rc genhtml_legend=1 00:20:40.639 --rc geninfo_all_blocks=1 00:20:40.639 --rc geninfo_unexecuted_blocks=1 00:20:40.639 00:20:40.639 ' 00:20:40.639 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:40.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.639 --rc genhtml_branch_coverage=1 00:20:40.639 --rc genhtml_function_coverage=1 00:20:40.639 --rc genhtml_legend=1 00:20:40.639 --rc geninfo_all_blocks=1 00:20:40.639 --rc geninfo_unexecuted_blocks=1 00:20:40.639 00:20:40.639 ' 00:20:40.639 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:40.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.639 --rc genhtml_branch_coverage=1 00:20:40.639 --rc genhtml_function_coverage=1 00:20:40.639 --rc genhtml_legend=1 00:20:40.639 --rc geninfo_all_blocks=1 00:20:40.639 --rc geninfo_unexecuted_blocks=1 00:20:40.639 00:20:40.639 ' 00:20:40.639 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:40.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.640 --rc genhtml_branch_coverage=1 00:20:40.640 --rc genhtml_function_coverage=1 00:20:40.640 --rc genhtml_legend=1 00:20:40.640 --rc geninfo_all_blocks=1 00:20:40.640 --rc geninfo_unexecuted_blocks=1 00:20:40.640 00:20:40.640 ' 00:20:40.640 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:40.640 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:20:40.899 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:40.899 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:40.899 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:40.899 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:40.899 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:40.899 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:40.899 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:40.899 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:40.899 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:40.899 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:40.899 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:20:40.899 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:20:40.899 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:40.899 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:40.899 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:40.899 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:40.899 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:40.899 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:20:40.899 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.900 
14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:40.900 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:40.900 14:25:08 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:40.900 Cannot find device "nvmf_init_br" 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # true 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:40.900 Cannot find device "nvmf_init_br2" 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # true 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:40.900 Cannot find device "nvmf_tgt_br" 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # true 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:40.900 Cannot find device "nvmf_tgt_br2" 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # true 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:40.900 Cannot find device "nvmf_init_br" 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # true 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:40.900 Cannot find device "nvmf_init_br2" 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # true 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:40.900 Cannot find device "nvmf_tgt_br" 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # true 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:40.900 Cannot find device "nvmf_tgt_br2" 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # true 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:40.900 Cannot find device "nvmf_br" 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # true 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:40.900 Cannot find device "nvmf_init_if" 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # true 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # ip link delete 
nvmf_init_if2 00:20:40.900 Cannot find device "nvmf_init_if2" 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # true 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:40.900 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # true 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:40.900 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # true 00:20:40.900 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:41.160 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:41.160 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:41.160 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:41.160 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:41.160 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:41.160 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:41.160 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:41.160 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:41.160 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:41.160 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:41.160 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:41.160 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:41.160 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:41.160 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:41.160 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:41.160 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:41.160 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:41.160 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set 
nvmf_tgt_if2 up 00:20:41.160 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:41.160 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:41.160 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:41.160 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:41.160 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:41.160 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:41.160 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:41.160 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:41.160 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:41.160 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:41.160 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:41.160 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:41.160 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:41.160 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:41.160 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:41.160 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:20:41.160 00:20:41.160 --- 10.0.0.3 ping statistics --- 00:20:41.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:41.160 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:20:41.160 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:41.160 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:41.160 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.070 ms 00:20:41.160 00:20:41.160 --- 10.0.0.4 ping statistics --- 00:20:41.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:41.160 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:20:41.160 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:41.421 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
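The nvmf_veth_init steps traced above reduce to the following sequence. This is a condensed sketch reconstructed from the commands visible in this log (namespace, interface, and address names as logged), not the verbatim contents of nvmf/common.sh:

    # dedicated namespace for the target plus two veth pairs per side
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    # move the target-side ends into the namespace and address everything
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    # bring links up, bridge the host-side peers, open TCP/4420 for the initiators
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for l in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$l" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$l" master nvmf_br
    done
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:...'
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:...'
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:...'

The "Cannot find device" and "Cannot open network namespace" messages earlier are the teardown of a previous topology that did not exist yet, which is why they are all followed by "true". The ping checks that follow verify that the host-side ends (10.0.0.1, 10.0.0.2) and the namespaced target ends (10.0.0.3, 10.0.0.4) reach each other across nvmf_br before the target application is started.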
00:20:41.421 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:20:41.421 00:20:41.421 --- 10.0.0.1 ping statistics --- 00:20:41.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:41.421 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:20:41.421 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:41.421 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:41.421 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:20:41.421 00:20:41.421 --- 10.0.0.2 ping statistics --- 00:20:41.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:41.421 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:20:41.421 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:41.421 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@461 -- # return 0 00:20:41.421 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:41.421 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:41.421 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:41.421 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:41.421 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:41.421 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:41.421 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:41.421 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:20:41.421 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:41.421 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:41.421 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:41.421 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=77701 00:20:41.421 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 77701 00:20:41.421 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@833 -- # '[' -z 77701 ']' 00:20:41.421 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:41.421 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:41.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:41.421 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
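At this point nvmfappstart -m 0xF launches the SPDK target inside the test namespace and waits for its RPC socket before the transport is created. A condensed sketch of what the surrounding log entries show (binary path, shm id, core mask, and transport options as logged; the backgrounding and the waitforlisten polling of /var/tmp/spdk.sock are inferred from the "Waiting for process..." message, not copied from nvmf/common.sh):

    # start nvmf_tgt in the target namespace: shm id 0, all trace groups, 4-core mask
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # block until the app is up and listening on the default RPC socket
    waitforlisten "$nvmfpid"          # polls /var/tmp/spdk.sock
    # single TCP transport shared by all subsystems in this test
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192

The four "Reactor started on core N" lines that follow confirm all cores from the 0xF mask came up, and the TCP transport init notice confirms the nvmf_create_transport RPC succeeded before any subsystems are created.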
00:20:41.421 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:41.421 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:41.421 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:41.421 [2024-11-06 14:25:08.949071] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:20:41.421 [2024-11-06 14:25:08.949200] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:41.680 [2024-11-06 14:25:09.133177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:41.680 [2024-11-06 14:25:09.285012] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:41.680 [2024-11-06 14:25:09.285071] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:41.680 [2024-11-06 14:25:09.285088] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:41.680 [2024-11-06 14:25:09.285099] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:41.680 [2024-11-06 14:25:09.285111] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:41.680 [2024-11-06 14:25:09.287548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:41.680 [2024-11-06 14:25:09.287731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:41.680 [2024-11-06 14:25:09.287807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:41.680 [2024-11-06 14:25:09.287819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:41.955 [2024-11-06 14:25:09.540553] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:42.212 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:42.213 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@866 -- # return 0 00:20:42.213 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:42.213 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:42.213 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:42.471 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:42.471 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:42.471 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.471 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:42.471 [2024-11-06 14:25:09.857925] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:42.471 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.471 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:20:42.471 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:42.471 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:42.471 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.471 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:42.471 Malloc1 00:20:42.471 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.471 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:20:42.471 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.471 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:42.471 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.471 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:42.471 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.471 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:42.471 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.471 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:42.472 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.472 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:42.472 [2024-11-06 14:25:10.004623] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:42.472 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.472 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:42.472 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:20:42.472 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.472 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:42.472 Malloc2 00:20:42.472 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.472 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:20:42.472 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:42.472 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:42.731 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.731 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:20:42.731 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.731 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:42.731 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.731 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:20:42.731 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.731 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:42.731 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.731 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:42.731 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:20:42.731 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.731 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:42.731 Malloc3 00:20:42.731 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.731 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:20:42.731 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.731 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:42.731 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.731 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:20:42.731 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.731 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:42.731 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.731 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:20:42.731 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.731 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:42.731 14:25:10 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.731 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:42.731 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:20:42.731 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.731 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:42.731 Malloc4 00:20:42.731 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.731 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:20:42.731 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.731 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:42.731 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.731 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:20:42.731 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.731 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:42.991 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.991 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.3 -s 4420 00:20:42.991 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.991 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:42.991 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.991 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:42.991 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:20:42.991 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.991 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:42.991 Malloc5 00:20:42.991 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.991 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:20:42.991 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.991 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:42.991 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.991 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:20:42.991 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.991 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:42.991 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.991 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.3 -s 4420 00:20:42.991 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.991 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:42.991 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.991 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:42.991 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:20:42.991 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.991 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:42.991 Malloc6 00:20:42.991 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.991 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:20:42.991 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.991 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:42.991 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.991 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:20:42.991 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.992 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:42.992 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.992 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.3 -s 4420 00:20:42.992 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.992 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:43.251 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.251 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:20:43.251 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:20:43.251 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.251 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:43.251 Malloc7 00:20:43.251 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.251 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:20:43.251 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.251 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:43.251 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.251 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:20:43.251 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.251 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:43.251 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.251 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.3 -s 4420 00:20:43.251 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.251 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:43.251 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.251 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:43.251 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:20:43.251 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.251 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:43.251 Malloc8 00:20:43.251 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.251 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:20:43.251 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.251 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:43.251 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.251 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:20:43.251 
14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.251 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:43.251 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.251 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.3 -s 4420 00:20:43.251 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.251 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:43.511 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.511 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:43.511 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:20:43.511 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.511 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:43.511 Malloc9 00:20:43.511 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.511 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:20:43.511 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.511 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:43.511 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.511 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:20:43.511 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.511 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:43.511 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.511 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.3 -s 4420 00:20:43.511 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.511 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:43.511 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.511 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:43.511 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:20:43.511 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.511 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:43.511 Malloc10 00:20:43.511 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.511 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:20:43.511 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.511 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:43.511 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.511 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:20:43.511 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.511 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:43.511 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.511 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.3 -s 4420 00:20:43.511 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.511 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:43.511 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.511 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:43.511 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:20:43.511 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.511 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:43.770 Malloc11 00:20:43.770 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.770 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:20:43.770 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.770 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:43.770 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.770 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:20:43.770 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.770 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
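Each pass of the multiconnection.sh loop traced above issues the same four RPCs, with only the index changing from 1 to 11. A condensed sketch of one iteration as it appears in this log (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512, NVMF_SUBSYS=11, listener on 10.0.0.3 port 4420):

    # target/multiconnection.sh: one malloc-backed subsystem per loop index
    for i in $(seq 1 "$NVMF_SUBSYS"); do
        rpc_cmd bdev_malloc_create "$MALLOC_BDEV_SIZE" "$MALLOC_BLOCK_SIZE" -b "Malloc$i"
        rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
        rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.3 -s 4420
    done

Every subsystem listens on the same namespaced address and port; they are distinguished only by NQN (cnode1..cnode11) and serial (SPDK1..SPDK11), which is what the host-side connect loop keys on next.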
00:20:43.770 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.770 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.3 -s 4420 00:20:43.770 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.770 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:43.770 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.770 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:20:43.770 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:43.770 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid=406d54d0-5e94-472a-a2b3-4291f3ac81e0 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:20:44.029 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:20:44.029 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:20:44.029 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:20:44.029 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:20:44.029 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:20:45.956 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:20:45.956 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:20:45.956 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK1 00:20:45.956 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:20:45.956 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:20:45.956 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:20:45.956 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:45.956 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid=406d54d0-5e94-472a-a2b3-4291f3ac81e0 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.3 -s 4420 00:20:45.956 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:20:45.956 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:20:45.956 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:20:45.956 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:20:45.956 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:20:48.491 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:20:48.491 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:20:48.491 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK2 00:20:48.491 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:20:48.491 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:20:48.491 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:20:48.491 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:48.491 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid=406d54d0-5e94-472a-a2b3-4291f3ac81e0 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.3 -s 4420 00:20:48.491 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:20:48.491 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:20:48.491 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:20:48.491 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:20:48.491 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:20:50.398 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:20:50.398 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:20:50.398 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK3 00:20:50.398 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:20:50.398 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:20:50.398 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:20:50.398 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:50.398 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid=406d54d0-5e94-472a-a2b3-4291f3ac81e0 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.3 -s 4420 00:20:50.398 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:20:50.398 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:20:50.398 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:20:50.398 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:20:50.398 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:20:52.931 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:20:52.931 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:20:52.931 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK4 00:20:52.931 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:20:52.931 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:20:52.931 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:20:52.931 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:52.931 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid=406d54d0-5e94-472a-a2b3-4291f3ac81e0 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.3 -s 4420 00:20:52.931 14:25:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:20:52.931 14:25:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:20:52.931 14:25:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:20:52.931 14:25:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:20:52.931 14:25:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:20:54.836 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:20:54.836 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:20:54.836 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK5 00:20:54.836 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:20:54.836 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:20:54.836 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:20:54.836 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:54.836 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid=406d54d0-5e94-472a-a2b3-4291f3ac81e0 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.3 -s 4420 00:20:54.836 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:20:54.836 14:25:22 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:20:54.836 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:20:54.836 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:20:54.836 14:25:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:20:56.742 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:20:56.742 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:20:56.742 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK6 00:20:56.742 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:20:56.742 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:20:56.742 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:20:56.742 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:56.742 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid=406d54d0-5e94-472a-a2b3-4291f3ac81e0 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.3 -s 4420 00:20:57.001 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:20:57.001 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:20:57.001 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:20:57.001 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:20:57.001 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:20:58.951 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:20:58.951 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK7 00:20:58.951 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:20:58.951 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:20:58.951 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:20:58.951 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:20:58.951 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:58.951 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid=406d54d0-5e94-472a-a2b3-4291f3ac81e0 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.3 -s 4420 
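The host side then connects to each of the eleven subsystems in turn and polls until a block device with the matching serial appears. A condensed sketch of the connect-and-wait pattern repeated throughout this part of the log (host NQN/ID, address, and serial names as logged; the retry count and sleep mirror the waitforserial counters visible in the xtrace, so treat the loop body as an approximation rather than the helper's exact code):

    # target/multiconnection.sh: attach the kernel initiator to cnode1..cnode11
    for i in $(seq 1 "$NVMF_SUBSYS"); do
        nvme connect \
            --hostnqn=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 \
            --hostid=406d54d0-5e94-472a-a2b3-4291f3ac81e0 \
            -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.3 -s 4420
        # waitforserial "SPDK$i": poll lsblk until a device with that serial shows up
        n=0
        until [ "$(lsblk -l -o NAME,SERIAL | grep -c "SPDK$i")" -ge 1 ]; do
            (( n++ > 15 )) && exit 1      # give up after ~15 attempts (sketch)
            sleep 2
        done
    done

In this run every connect succeeds on the first check (nvme_devices=1 each time), so the loop advances through all eleven controllers roughly two seconds apart.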
00:20:59.209 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:20:59.209 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:20:59.209 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:20:59.209 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:20:59.209 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:21:01.142 14:25:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:21:01.142 14:25:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:21:01.142 14:25:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK8 00:21:01.142 14:25:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:21:01.142 14:25:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:21:01.142 14:25:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:21:01.142 14:25:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:01.142 14:25:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid=406d54d0-5e94-472a-a2b3-4291f3ac81e0 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.3 -s 4420 00:21:01.400 14:25:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:21:01.400 14:25:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:21:01.400 14:25:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:21:01.400 14:25:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:21:01.400 14:25:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:21:03.303 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:21:03.303 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:21:03.303 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK9 00:21:03.303 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:21:03.303 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:21:03.303 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:21:03.303 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:03.303 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid=406d54d0-5e94-472a-a2b3-4291f3ac81e0 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.3 -s 4420 00:21:03.562 14:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:21:03.562 14:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:21:03.562 14:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:21:03.562 14:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:21:03.562 14:25:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:21:05.470 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:21:05.470 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK10 00:21:05.470 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:21:05.470 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:21:05.470 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:21:05.470 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:21:05.470 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:05.470 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid=406d54d0-5e94-472a-a2b3-4291f3ac81e0 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.3 -s 4420 00:21:05.728 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:21:05.728 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:21:05.728 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:21:05.728 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:21:05.728 14:25:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:21:08.264 14:25:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:21:08.264 14:25:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:21:08.264 14:25:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK11 00:21:08.264 14:25:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:21:08.264 14:25:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:21:08.264 14:25:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:21:08.264 14:25:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # 
/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:21:08.264 [global] 00:21:08.264 thread=1 00:21:08.264 invalidate=1 00:21:08.264 rw=read 00:21:08.264 time_based=1 00:21:08.264 runtime=10 00:21:08.264 ioengine=libaio 00:21:08.264 direct=1 00:21:08.264 bs=262144 00:21:08.264 iodepth=64 00:21:08.264 norandommap=1 00:21:08.264 numjobs=1 00:21:08.264 00:21:08.264 [job0] 00:21:08.264 filename=/dev/nvme0n1 00:21:08.264 [job1] 00:21:08.264 filename=/dev/nvme10n1 00:21:08.264 [job2] 00:21:08.264 filename=/dev/nvme1n1 00:21:08.264 [job3] 00:21:08.264 filename=/dev/nvme2n1 00:21:08.264 [job4] 00:21:08.264 filename=/dev/nvme3n1 00:21:08.264 [job5] 00:21:08.264 filename=/dev/nvme4n1 00:21:08.264 [job6] 00:21:08.264 filename=/dev/nvme5n1 00:21:08.264 [job7] 00:21:08.264 filename=/dev/nvme6n1 00:21:08.264 [job8] 00:21:08.264 filename=/dev/nvme7n1 00:21:08.264 [job9] 00:21:08.264 filename=/dev/nvme8n1 00:21:08.264 [job10] 00:21:08.264 filename=/dev/nvme9n1 00:21:08.264 Could not set queue depth (nvme0n1) 00:21:08.264 Could not set queue depth (nvme10n1) 00:21:08.264 Could not set queue depth (nvme1n1) 00:21:08.264 Could not set queue depth (nvme2n1) 00:21:08.264 Could not set queue depth (nvme3n1) 00:21:08.264 Could not set queue depth (nvme4n1) 00:21:08.264 Could not set queue depth (nvme5n1) 00:21:08.264 Could not set queue depth (nvme6n1) 00:21:08.264 Could not set queue depth (nvme7n1) 00:21:08.264 Could not set queue depth (nvme8n1) 00:21:08.264 Could not set queue depth (nvme9n1) 00:21:08.264 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:08.264 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:08.264 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:08.264 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:08.264 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:08.264 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:08.264 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:08.264 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:08.264 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:08.264 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:08.264 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:08.264 fio-3.35 00:21:08.264 Starting 11 threads 00:21:20.473 00:21:20.473 job0: (groupid=0, jobs=1): err= 0: pid=78168: Wed Nov 6 14:25:46 2024 00:21:20.473 read: IOPS=100, BW=25.2MiB/s (26.5MB/s)(257MiB/10198msec) 00:21:20.473 slat (usec): min=37, max=267475, avg=9761.47, stdev=28195.76 00:21:20.473 clat (msec): min=26, max=956, avg=623.31, stdev=191.84 00:21:20.473 lat (msec): min=27, max=994, avg=633.07, stdev=193.57 00:21:20.473 clat percentiles (msec): 00:21:20.473 | 1.00th=[ 163], 5.00th=[ 201], 10.00th=[ 296], 20.00th=[ 464], 00:21:20.473 | 30.00th=[ 567], 40.00th=[ 642], 50.00th=[ 676], 60.00th=[ 709], 00:21:20.473 | 70.00th=[ 735], 
80.00th=[ 776], 90.00th=[ 818], 95.00th=[ 852], 00:21:20.473 | 99.00th=[ 927], 99.50th=[ 944], 99.90th=[ 961], 99.95th=[ 961], 00:21:20.473 | 99.99th=[ 961] 00:21:20.473 bw ( KiB/s): min=12800, max=39936, per=4.23%, avg=24682.95, stdev=7344.10, samples=20 00:21:20.473 iops : min= 50, max= 156, avg=96.25, stdev=28.63, samples=20 00:21:20.473 lat (msec) : 50=0.10%, 250=8.07%, 500=14.48%, 750=51.31%, 1000=26.04% 00:21:20.473 cpu : usr=0.07%, sys=0.69%, ctx=184, majf=0, minf=4097 00:21:20.473 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.1%, >=64=93.9% 00:21:20.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.473 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:20.473 issued rwts: total=1029,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.473 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:20.473 job1: (groupid=0, jobs=1): err= 0: pid=78172: Wed Nov 6 14:25:46 2024 00:21:20.473 read: IOPS=507, BW=127MiB/s (133MB/s)(1278MiB/10061msec) 00:21:20.473 slat (usec): min=15, max=100949, avg=1949.91, stdev=5109.21 00:21:20.473 clat (msec): min=29, max=392, avg=123.70, stdev=43.10 00:21:20.473 lat (msec): min=30, max=392, avg=125.65, stdev=43.67 00:21:20.473 clat percentiles (msec): 00:21:20.473 | 1.00th=[ 93], 5.00th=[ 101], 10.00th=[ 104], 20.00th=[ 107], 00:21:20.473 | 30.00th=[ 110], 40.00th=[ 112], 50.00th=[ 113], 60.00th=[ 115], 00:21:20.473 | 70.00th=[ 118], 80.00th=[ 123], 90.00th=[ 142], 95.00th=[ 199], 00:21:20.473 | 99.00th=[ 359], 99.50th=[ 372], 99.90th=[ 388], 99.95th=[ 388], 00:21:20.473 | 99.99th=[ 393] 00:21:20.473 bw ( KiB/s): min=44120, max=150016, per=22.12%, avg=129126.55, stdev=31071.43, samples=20 00:21:20.473 iops : min= 172, max= 586, avg=504.25, stdev=121.38, samples=20 00:21:20.473 lat (msec) : 50=0.37%, 100=4.72%, 250=91.43%, 500=3.48% 00:21:20.473 cpu : usr=0.32%, sys=2.84%, ctx=1056, majf=0, minf=4097 00:21:20.473 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:21:20.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.473 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:20.473 issued rwts: total=5110,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.473 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:20.473 job2: (groupid=0, jobs=1): err= 0: pid=78173: Wed Nov 6 14:25:46 2024 00:21:20.473 read: IOPS=217, BW=54.3MiB/s (57.0MB/s)(549MiB/10095msec) 00:21:20.473 slat (usec): min=16, max=261690, avg=4551.39, stdev=13098.15 00:21:20.473 clat (msec): min=54, max=686, avg=289.42, stdev=102.20 00:21:20.473 lat (msec): min=54, max=707, avg=293.97, stdev=102.97 00:21:20.473 clat percentiles (msec): 00:21:20.473 | 1.00th=[ 153], 5.00th=[ 222], 10.00th=[ 232], 20.00th=[ 241], 00:21:20.473 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 259], 60.00th=[ 266], 00:21:20.473 | 70.00th=[ 271], 80.00th=[ 284], 90.00th=[ 435], 95.00th=[ 584], 00:21:20.473 | 99.00th=[ 676], 99.50th=[ 684], 99.90th=[ 684], 99.95th=[ 684], 00:21:20.473 | 99.99th=[ 684] 00:21:20.473 bw ( KiB/s): min=14336, max=68096, per=9.35%, avg=54578.45, stdev=16481.36, samples=20 00:21:20.473 iops : min= 56, max= 266, avg=212.85, stdev=64.36, samples=20 00:21:20.473 lat (msec) : 100=0.64%, 250=32.73%, 500=60.03%, 750=6.61% 00:21:20.473 cpu : usr=0.17%, sys=1.26%, ctx=428, majf=0, minf=4097 00:21:20.473 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 00:21:20.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:21:20.473 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:20.473 issued rwts: total=2194,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.473 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:20.473 job3: (groupid=0, jobs=1): err= 0: pid=78174: Wed Nov 6 14:25:46 2024 00:21:20.473 read: IOPS=104, BW=26.2MiB/s (27.5MB/s)(268MiB/10197msec) 00:21:20.473 slat (usec): min=15, max=197636, avg=8834.36, stdev=26306.53 00:21:20.473 clat (msec): min=36, max=949, avg=599.35, stdev=195.46 00:21:20.473 lat (msec): min=38, max=950, avg=608.18, stdev=198.39 00:21:20.473 clat percentiles (msec): 00:21:20.473 | 1.00th=[ 78], 5.00th=[ 251], 10.00th=[ 296], 20.00th=[ 414], 00:21:20.473 | 30.00th=[ 518], 40.00th=[ 609], 50.00th=[ 651], 60.00th=[ 693], 00:21:20.473 | 70.00th=[ 735], 80.00th=[ 768], 90.00th=[ 810], 95.00th=[ 852], 00:21:20.473 | 99.00th=[ 885], 99.50th=[ 894], 99.90th=[ 953], 99.95th=[ 953], 00:21:20.473 | 99.99th=[ 953] 00:21:20.473 bw ( KiB/s): min=16896, max=54272, per=4.41%, avg=25747.15, stdev=8462.24, samples=20 00:21:20.473 iops : min= 66, max= 212, avg=100.45, stdev=33.13, samples=20 00:21:20.473 lat (msec) : 50=0.09%, 100=1.78%, 250=3.27%, 500=24.58%, 750=46.64% 00:21:20.473 lat (msec) : 1000=23.64% 00:21:20.473 cpu : usr=0.07%, sys=0.63%, ctx=189, majf=0, minf=4097 00:21:20.473 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.5%, 32=3.0%, >=64=94.1% 00:21:20.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.473 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:20.473 issued rwts: total=1070,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.473 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:20.473 job4: (groupid=0, jobs=1): err= 0: pid=78175: Wed Nov 6 14:25:46 2024 00:21:20.473 read: IOPS=109, BW=27.4MiB/s (28.7MB/s)(279MiB/10199msec) 00:21:20.473 slat (usec): min=17, max=263005, avg=8660.68, stdev=27805.02 00:21:20.473 clat (msec): min=27, max=1068, avg=574.56, stdev=257.78 00:21:20.473 lat (msec): min=28, max=1068, avg=583.22, stdev=261.16 00:21:20.473 clat percentiles (msec): 00:21:20.473 | 1.00th=[ 43], 5.00th=[ 136], 10.00th=[ 243], 20.00th=[ 300], 00:21:20.473 | 30.00th=[ 342], 40.00th=[ 542], 50.00th=[ 617], 60.00th=[ 676], 00:21:20.473 | 70.00th=[ 776], 80.00th=[ 835], 90.00th=[ 894], 95.00th=[ 919], 00:21:20.473 | 99.00th=[ 978], 99.50th=[ 1003], 99.90th=[ 1070], 99.95th=[ 1070], 00:21:20.473 | 99.99th=[ 1070] 00:21:20.473 bw ( KiB/s): min=11264, max=64512, per=4.61%, avg=26930.35, stdev=14362.08, samples=20 00:21:20.473 iops : min= 44, max= 252, avg=105.05, stdev=56.06, samples=20 00:21:20.473 lat (msec) : 50=1.16%, 100=2.33%, 250=7.79%, 500=24.80%, 750=31.33% 00:21:20.473 lat (msec) : 1000=32.14%, 2000=0.45% 00:21:20.473 cpu : usr=0.08%, sys=0.70%, ctx=219, majf=0, minf=4097 00:21:20.473 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.9%, >=64=94.4% 00:21:20.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.473 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:20.473 issued rwts: total=1117,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.473 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:20.473 job5: (groupid=0, jobs=1): err= 0: pid=78176: Wed Nov 6 14:25:46 2024 00:21:20.473 read: IOPS=94, BW=23.6MiB/s (24.7MB/s)(240MiB/10187msec) 00:21:20.473 slat (usec): min=16, max=427305, avg=10424.66, stdev=32642.03 00:21:20.473 clat (msec): min=172, 
max=934, avg=666.88, stdev=147.84 00:21:20.473 lat (msec): min=242, max=934, avg=677.30, stdev=147.89 00:21:20.473 clat percentiles (msec): 00:21:20.473 | 1.00th=[ 275], 5.00th=[ 401], 10.00th=[ 460], 20.00th=[ 493], 00:21:20.473 | 30.00th=[ 617], 40.00th=[ 651], 50.00th=[ 693], 60.00th=[ 726], 00:21:20.473 | 70.00th=[ 768], 80.00th=[ 802], 90.00th=[ 844], 95.00th=[ 852], 00:21:20.473 | 99.00th=[ 885], 99.50th=[ 902], 99.90th=[ 936], 99.95th=[ 936], 00:21:20.473 | 99.99th=[ 936] 00:21:20.473 bw ( KiB/s): min= 9216, max=45568, per=3.93%, avg=22941.10, stdev=9183.43, samples=20 00:21:20.473 iops : min= 36, max= 178, avg=89.60, stdev=35.88, samples=20 00:21:20.473 lat (msec) : 250=0.42%, 500=19.79%, 750=45.83%, 1000=33.96% 00:21:20.473 cpu : usr=0.05%, sys=0.59%, ctx=188, majf=0, minf=4097 00:21:20.473 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.7%, 32=3.3%, >=64=93.4% 00:21:20.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.473 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:20.473 issued rwts: total=960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.473 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:20.473 job6: (groupid=0, jobs=1): err= 0: pid=78177: Wed Nov 6 14:25:46 2024 00:21:20.473 read: IOPS=222, BW=55.6MiB/s (58.3MB/s)(562MiB/10108msec) 00:21:20.473 slat (usec): min=16, max=243764, avg=4463.79, stdev=12803.85 00:21:20.473 clat (msec): min=31, max=668, avg=282.75, stdev=91.44 00:21:20.474 lat (msec): min=33, max=718, avg=287.22, stdev=92.47 00:21:20.474 clat percentiles (msec): 00:21:20.474 | 1.00th=[ 114], 5.00th=[ 224], 10.00th=[ 234], 20.00th=[ 243], 00:21:20.474 | 30.00th=[ 249], 40.00th=[ 255], 50.00th=[ 259], 60.00th=[ 262], 00:21:20.474 | 70.00th=[ 271], 80.00th=[ 279], 90.00th=[ 384], 95.00th=[ 542], 00:21:20.474 | 99.00th=[ 667], 99.50th=[ 667], 99.90th=[ 667], 99.95th=[ 667], 00:21:20.474 | 99.99th=[ 667] 00:21:20.474 bw ( KiB/s): min=23504, max=67584, per=9.57%, avg=55848.90, stdev=13655.75, samples=20 00:21:20.474 iops : min= 91, max= 264, avg=218.00, stdev=53.48, samples=20 00:21:20.474 lat (msec) : 50=0.31%, 100=0.04%, 250=32.35%, 500=61.33%, 750=5.96% 00:21:20.474 cpu : usr=0.11%, sys=1.25%, ctx=452, majf=0, minf=4097 00:21:20.474 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:21:20.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.474 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:20.474 issued rwts: total=2247,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.474 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:20.474 job7: (groupid=0, jobs=1): err= 0: pid=78178: Wed Nov 6 14:25:46 2024 00:21:20.474 read: IOPS=219, BW=54.9MiB/s (57.6MB/s)(555MiB/10104msec) 00:21:20.474 slat (usec): min=17, max=154050, avg=4502.67, stdev=12272.13 00:21:20.474 clat (msec): min=36, max=668, avg=286.01, stdev=86.47 00:21:20.474 lat (msec): min=38, max=684, avg=290.51, stdev=87.47 00:21:20.474 clat percentiles (msec): 00:21:20.474 | 1.00th=[ 146], 5.00th=[ 220], 10.00th=[ 232], 20.00th=[ 243], 00:21:20.474 | 30.00th=[ 249], 40.00th=[ 255], 50.00th=[ 259], 60.00th=[ 266], 00:21:20.474 | 70.00th=[ 271], 80.00th=[ 292], 90.00th=[ 405], 95.00th=[ 510], 00:21:20.474 | 99.00th=[ 617], 99.50th=[ 617], 99.90th=[ 667], 99.95th=[ 667], 00:21:20.474 | 99.99th=[ 667] 00:21:20.474 bw ( KiB/s): min=16929, max=67584, per=9.45%, avg=55188.20, stdev=14125.55, samples=20 00:21:20.474 iops : min= 66, max= 264, 
avg=215.45, stdev=55.22, samples=20 00:21:20.474 lat (msec) : 50=0.14%, 100=0.05%, 250=31.58%, 500=62.52%, 750=5.72% 00:21:20.474 cpu : usr=0.17%, sys=1.22%, ctx=467, majf=0, minf=4097 00:21:20.474 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:21:20.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.474 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:20.474 issued rwts: total=2220,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.474 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:20.474 job8: (groupid=0, jobs=1): err= 0: pid=78179: Wed Nov 6 14:25:46 2024 00:21:20.474 read: IOPS=109, BW=27.3MiB/s (28.6MB/s)(278MiB/10188msec) 00:21:20.474 slat (usec): min=16, max=216271, avg=9002.28, stdev=26049.21 00:21:20.474 clat (msec): min=77, max=908, avg=576.27, stdev=211.76 00:21:20.474 lat (msec): min=77, max=968, avg=585.27, stdev=214.55 00:21:20.474 clat percentiles (msec): 00:21:20.474 | 1.00th=[ 92], 5.00th=[ 232], 10.00th=[ 275], 20.00th=[ 355], 00:21:20.474 | 30.00th=[ 447], 40.00th=[ 542], 50.00th=[ 625], 60.00th=[ 676], 00:21:20.474 | 70.00th=[ 726], 80.00th=[ 793], 90.00th=[ 818], 95.00th=[ 852], 00:21:20.474 | 99.00th=[ 894], 99.50th=[ 911], 99.90th=[ 911], 99.95th=[ 911], 00:21:20.474 | 99.99th=[ 911] 00:21:20.474 bw ( KiB/s): min=17373, max=56432, per=4.60%, avg=26828.30, stdev=10513.83, samples=20 00:21:20.474 iops : min= 67, max= 220, avg=104.65, stdev=41.09, samples=20 00:21:20.474 lat (msec) : 100=1.53%, 250=5.31%, 500=30.40%, 750=36.69%, 1000=26.08% 00:21:20.474 cpu : usr=0.05%, sys=0.73%, ctx=208, majf=0, minf=4097 00:21:20.474 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.9%, >=64=94.3% 00:21:20.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.474 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:20.474 issued rwts: total=1112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.474 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:20.474 job9: (groupid=0, jobs=1): err= 0: pid=78180: Wed Nov 6 14:25:46 2024 00:21:20.474 read: IOPS=104, BW=26.1MiB/s (27.3MB/s)(266MiB/10202msec) 00:21:20.474 slat (usec): min=14, max=251418, avg=8874.25, stdev=25398.16 00:21:20.474 clat (msec): min=40, max=877, avg=602.96, stdev=168.77 00:21:20.474 lat (msec): min=41, max=877, avg=611.83, stdev=171.10 00:21:20.474 clat percentiles (msec): 00:21:20.474 | 1.00th=[ 142], 5.00th=[ 321], 10.00th=[ 380], 20.00th=[ 439], 00:21:20.474 | 30.00th=[ 493], 40.00th=[ 600], 50.00th=[ 659], 60.00th=[ 693], 00:21:20.474 | 70.00th=[ 726], 80.00th=[ 751], 90.00th=[ 785], 95.00th=[ 818], 00:21:20.474 | 99.00th=[ 844], 99.50th=[ 844], 99.90th=[ 860], 99.95th=[ 877], 00:21:20.474 | 99.99th=[ 877] 00:21:20.474 bw ( KiB/s): min=15360, max=39936, per=4.38%, avg=25579.40, stdev=6519.22, samples=20 00:21:20.474 iops : min= 60, max= 156, avg=99.75, stdev=25.48, samples=20 00:21:20.474 lat (msec) : 50=0.09%, 100=0.75%, 250=2.54%, 500=26.97%, 750=48.40% 00:21:20.474 lat (msec) : 1000=21.24% 00:21:20.474 cpu : usr=0.06%, sys=0.70%, ctx=217, majf=0, minf=4097 00:21:20.474 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.0%, >=64=94.1% 00:21:20.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.474 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:20.474 issued rwts: total=1064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.474 latency : target=0, window=0, 
percentile=100.00%, depth=64 00:21:20.474 job10: (groupid=0, jobs=1): err= 0: pid=78181: Wed Nov 6 14:25:46 2024 00:21:20.474 read: IOPS=511, BW=128MiB/s (134MB/s)(1285MiB/10056msec) 00:21:20.474 slat (usec): min=19, max=78206, avg=1938.94, stdev=4837.65 00:21:20.474 clat (msec): min=31, max=375, avg=122.97, stdev=38.99 00:21:20.474 lat (msec): min=32, max=375, avg=124.91, stdev=39.55 00:21:20.474 clat percentiles (msec): 00:21:20.474 | 1.00th=[ 92], 5.00th=[ 101], 10.00th=[ 104], 20.00th=[ 107], 00:21:20.474 | 30.00th=[ 109], 40.00th=[ 111], 50.00th=[ 113], 60.00th=[ 115], 00:21:20.474 | 70.00th=[ 118], 80.00th=[ 123], 90.00th=[ 146], 95.00th=[ 197], 00:21:20.474 | 99.00th=[ 313], 99.50th=[ 330], 99.90th=[ 342], 99.95th=[ 355], 00:21:20.474 | 99.99th=[ 376] 00:21:20.474 bw ( KiB/s): min=53248, max=148992, per=22.27%, avg=129985.95, stdev=30119.54, samples=20 00:21:20.474 iops : min= 208, max= 582, avg=507.75, stdev=117.65, samples=20 00:21:20.474 lat (msec) : 50=0.08%, 100=4.77%, 250=91.52%, 500=3.64% 00:21:20.474 cpu : usr=0.37%, sys=2.72%, ctx=1061, majf=0, minf=4097 00:21:20.474 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:21:20.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.474 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:20.474 issued rwts: total=5140,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.474 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:20.474 00:21:20.474 Run status group 0 (all jobs): 00:21:20.474 READ: bw=570MiB/s (598MB/s), 23.6MiB/s-128MiB/s (24.7MB/s-134MB/s), io=5816MiB (6098MB), run=10056-10202msec 00:21:20.474 00:21:20.474 Disk stats (read/write): 00:21:20.474 nvme0n1: ios=1930/0, merge=0/0, ticks=1199810/0, in_queue=1199810, util=97.89% 00:21:20.474 nvme10n1: ios=10116/0, merge=0/0, ticks=1236245/0, in_queue=1236245, util=97.90% 00:21:20.474 nvme1n1: ios=4264/0, merge=0/0, ticks=1228544/0, in_queue=1228544, util=98.13% 00:21:20.474 nvme2n1: ios=2012/0, merge=0/0, ticks=1203300/0, in_queue=1203300, util=98.28% 00:21:20.474 nvme3n1: ios=2107/0, merge=0/0, ticks=1207534/0, in_queue=1207534, util=98.44% 00:21:20.474 nvme4n1: ios=1793/0, merge=0/0, ticks=1188750/0, in_queue=1188750, util=98.40% 00:21:20.474 nvme5n1: ios=4366/0, merge=0/0, ticks=1229605/0, in_queue=1229605, util=98.62% 00:21:20.474 nvme6n1: ios=4322/0, merge=0/0, ticks=1229853/0, in_queue=1229853, util=98.64% 00:21:20.474 nvme7n1: ios=2100/0, merge=0/0, ticks=1192747/0, in_queue=1192747, util=99.00% 00:21:20.474 nvme8n1: ios=2000/0, merge=0/0, ticks=1203800/0, in_queue=1203800, util=99.00% 00:21:20.474 nvme9n1: ios=10187/0, merge=0/0, ticks=1236452/0, in_queue=1236452, util=99.04% 00:21:20.474 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:21:20.474 [global] 00:21:20.474 thread=1 00:21:20.474 invalidate=1 00:21:20.474 rw=randwrite 00:21:20.474 time_based=1 00:21:20.474 runtime=10 00:21:20.474 ioengine=libaio 00:21:20.474 direct=1 00:21:20.474 bs=262144 00:21:20.474 iodepth=64 00:21:20.474 norandommap=1 00:21:20.474 numjobs=1 00:21:20.474 00:21:20.474 [job0] 00:21:20.474 filename=/dev/nvme0n1 00:21:20.474 [job1] 00:21:20.474 filename=/dev/nvme10n1 00:21:20.474 [job2] 00:21:20.474 filename=/dev/nvme1n1 00:21:20.474 [job3] 00:21:20.474 filename=/dev/nvme2n1 00:21:20.474 [job4] 00:21:20.474 filename=/dev/nvme3n1 00:21:20.474 [job5] 00:21:20.474 
filename=/dev/nvme4n1 00:21:20.474 [job6] 00:21:20.474 filename=/dev/nvme5n1 00:21:20.474 [job7] 00:21:20.474 filename=/dev/nvme6n1 00:21:20.474 [job8] 00:21:20.474 filename=/dev/nvme7n1 00:21:20.474 [job9] 00:21:20.474 filename=/dev/nvme8n1 00:21:20.474 [job10] 00:21:20.474 filename=/dev/nvme9n1 00:21:20.474 Could not set queue depth (nvme0n1) 00:21:20.474 Could not set queue depth (nvme10n1) 00:21:20.474 Could not set queue depth (nvme1n1) 00:21:20.474 Could not set queue depth (nvme2n1) 00:21:20.474 Could not set queue depth (nvme3n1) 00:21:20.474 Could not set queue depth (nvme4n1) 00:21:20.474 Could not set queue depth (nvme5n1) 00:21:20.474 Could not set queue depth (nvme6n1) 00:21:20.474 Could not set queue depth (nvme7n1) 00:21:20.474 Could not set queue depth (nvme8n1) 00:21:20.474 Could not set queue depth (nvme9n1) 00:21:20.474 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:20.474 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:20.474 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:20.474 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:20.474 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:20.474 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:20.474 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:20.474 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:20.474 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:20.474 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:20.475 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:20.475 fio-3.35 00:21:20.475 Starting 11 threads 00:21:30.463 00:21:30.464 job0: (groupid=0, jobs=1): err= 0: pid=78376: Wed Nov 6 14:25:57 2024 00:21:30.464 write: IOPS=255, BW=64.0MiB/s (67.1MB/s)(651MiB/10167msec); 0 zone resets 00:21:30.464 slat (usec): min=18, max=43252, avg=3751.16, stdev=6866.29 00:21:30.464 clat (msec): min=35, max=360, avg=246.23, stdev=53.28 00:21:30.464 lat (msec): min=35, max=360, avg=249.98, stdev=53.65 00:21:30.464 clat percentiles (msec): 00:21:30.464 | 1.00th=[ 115], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 203], 00:21:30.464 | 30.00th=[ 205], 40.00th=[ 207], 50.00th=[ 218], 60.00th=[ 279], 00:21:30.464 | 70.00th=[ 296], 80.00th=[ 305], 90.00th=[ 313], 95.00th=[ 317], 00:21:30.464 | 99.00th=[ 338], 99.50th=[ 342], 99.90th=[ 351], 99.95th=[ 359], 00:21:30.464 | 99.99th=[ 363] 00:21:30.464 bw ( KiB/s): min=50688, max=81920, per=9.62%, avg=64965.90, stdev=12956.73, samples=20 00:21:30.464 iops : min= 198, max= 320, avg=253.65, stdev=50.64, samples=20 00:21:30.464 lat (msec) : 50=0.31%, 100=0.61%, 250=53.69%, 500=45.39% 00:21:30.464 cpu : usr=0.84%, sys=0.89%, ctx=2660, majf=0, minf=1 00:21:30.464 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:21:30.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:21:30.464 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:30.464 issued rwts: total=0,2602,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:30.464 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:30.464 job1: (groupid=0, jobs=1): err= 0: pid=78377: Wed Nov 6 14:25:57 2024 00:21:30.464 write: IOPS=254, BW=63.6MiB/s (66.7MB/s)(647MiB/10165msec); 0 zone resets 00:21:30.464 slat (usec): min=18, max=55029, avg=3859.51, stdev=7026.92 00:21:30.464 clat (msec): min=55, max=361, avg=247.42, stdev=53.19 00:21:30.464 lat (msec): min=55, max=361, avg=251.28, stdev=53.60 00:21:30.464 clat percentiles (msec): 00:21:30.464 | 1.00th=[ 144], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 203], 00:21:30.464 | 30.00th=[ 205], 40.00th=[ 207], 50.00th=[ 218], 60.00th=[ 284], 00:21:30.464 | 70.00th=[ 296], 80.00th=[ 305], 90.00th=[ 313], 95.00th=[ 321], 00:21:30.464 | 99.00th=[ 347], 99.50th=[ 355], 99.90th=[ 363], 99.95th=[ 363], 00:21:30.464 | 99.99th=[ 363] 00:21:30.464 bw ( KiB/s): min=49152, max=81920, per=9.57%, avg=64613.05, stdev=13164.36, samples=20 00:21:30.464 iops : min= 192, max= 320, avg=252.30, stdev=51.42, samples=20 00:21:30.464 lat (msec) : 100=0.62%, 250=54.21%, 500=45.17% 00:21:30.464 cpu : usr=0.62%, sys=1.02%, ctx=2686, majf=0, minf=1 00:21:30.464 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:21:30.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.464 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:30.464 issued rwts: total=0,2588,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:30.464 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:30.464 job2: (groupid=0, jobs=1): err= 0: pid=78378: Wed Nov 6 14:25:57 2024 00:21:30.464 write: IOPS=191, BW=47.9MiB/s (50.2MB/s)(491MiB/10251msec); 0 zone resets 00:21:30.464 slat (usec): min=29, max=112095, avg=5087.77, stdev=9478.14 00:21:30.464 clat (msec): min=55, max=582, avg=328.75, stdev=61.77 00:21:30.464 lat (msec): min=55, max=582, avg=333.84, stdev=62.02 00:21:30.464 clat percentiles (msec): 00:21:30.464 | 1.00th=[ 106], 5.00th=[ 275], 10.00th=[ 288], 20.00th=[ 296], 00:21:30.464 | 30.00th=[ 305], 40.00th=[ 309], 50.00th=[ 326], 60.00th=[ 326], 00:21:30.464 | 70.00th=[ 330], 80.00th=[ 338], 90.00th=[ 430], 95.00th=[ 456], 00:21:30.464 | 99.00th=[ 485], 99.50th=[ 542], 99.90th=[ 584], 99.95th=[ 584], 00:21:30.464 | 99.99th=[ 584] 00:21:30.464 bw ( KiB/s): min=34816, max=57229, per=7.20%, avg=48639.40, stdev=6485.17, samples=20 00:21:30.464 iops : min= 136, max= 223, avg=189.85, stdev=25.26, samples=20 00:21:30.464 lat (msec) : 100=0.81%, 250=1.88%, 500=96.59%, 750=0.71% 00:21:30.464 cpu : usr=0.59%, sys=0.51%, ctx=1728, majf=0, minf=1 00:21:30.464 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.8% 00:21:30.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.464 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:30.464 issued rwts: total=0,1964,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:30.464 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:30.464 job3: (groupid=0, jobs=1): err= 0: pid=78387: Wed Nov 6 14:25:57 2024 00:21:30.464 write: IOPS=188, BW=47.1MiB/s (49.3MB/s)(482MiB/10249msec); 0 zone resets 00:21:30.464 slat (usec): min=14, max=269998, avg=5184.71, stdev=10886.50 00:21:30.464 clat (msec): min=233, max=606, avg=334.68, stdev=57.02 00:21:30.464 lat (msec): min=254, max=606, avg=339.87, stdev=56.87 
00:21:30.464 clat percentiles (msec): 00:21:30.464 | 1.00th=[ 275], 5.00th=[ 279], 10.00th=[ 292], 20.00th=[ 296], 00:21:30.464 | 30.00th=[ 305], 40.00th=[ 309], 50.00th=[ 326], 60.00th=[ 326], 00:21:30.464 | 70.00th=[ 330], 80.00th=[ 338], 90.00th=[ 439], 95.00th=[ 472], 00:21:30.464 | 99.00th=[ 542], 99.50th=[ 567], 99.90th=[ 609], 99.95th=[ 609], 00:21:30.464 | 99.99th=[ 609] 00:21:30.464 bw ( KiB/s): min=22573, max=55296, per=7.07%, avg=47725.25, stdev=8709.80, samples=20 00:21:30.464 iops : min= 88, max= 216, avg=186.30, stdev=33.99, samples=20 00:21:30.464 lat (msec) : 250=0.05%, 500=98.24%, 750=1.71% 00:21:30.464 cpu : usr=0.58%, sys=0.74%, ctx=2370, majf=0, minf=1 00:21:30.464 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.7%, >=64=96.7% 00:21:30.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.464 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:30.464 issued rwts: total=0,1929,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:30.464 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:30.464 job4: (groupid=0, jobs=1): err= 0: pid=78391: Wed Nov 6 14:25:57 2024 00:21:30.464 write: IOPS=366, BW=91.6MiB/s (96.0MB/s)(932MiB/10182msec); 0 zone resets 00:21:30.464 slat (usec): min=18, max=19543, avg=2666.42, stdev=4834.95 00:21:30.464 clat (msec): min=14, max=423, avg=172.00, stdev=52.83 00:21:30.464 lat (msec): min=14, max=423, avg=174.67, stdev=53.43 00:21:30.464 clat percentiles (msec): 00:21:30.464 | 1.00th=[ 56], 5.00th=[ 88], 10.00th=[ 94], 20.00th=[ 150], 00:21:30.464 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 163], 60.00th=[ 165], 00:21:30.464 | 70.00th=[ 169], 80.00th=[ 236], 90.00th=[ 247], 95.00th=[ 251], 00:21:30.464 | 99.00th=[ 284], 99.50th=[ 351], 99.90th=[ 409], 99.95th=[ 426], 00:21:30.464 | 99.99th=[ 426] 00:21:30.464 bw ( KiB/s): min=63872, max=177664, per=13.89%, avg=93806.05, stdev=27544.53, samples=20 00:21:30.464 iops : min= 249, max= 694, avg=366.35, stdev=107.64, samples=20 00:21:30.464 lat (msec) : 20=0.21%, 50=0.67%, 100=10.67%, 250=83.83%, 500=4.61% 00:21:30.464 cpu : usr=0.90%, sys=1.38%, ctx=3988, majf=0, minf=1 00:21:30.464 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:21:30.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.464 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:30.464 issued rwts: total=0,3729,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:30.464 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:30.464 job5: (groupid=0, jobs=1): err= 0: pid=78392: Wed Nov 6 14:25:57 2024 00:21:30.464 write: IOPS=192, BW=48.1MiB/s (50.4MB/s)(493MiB/10254msec); 0 zone resets 00:21:30.464 slat (usec): min=19, max=69408, avg=5005.06, stdev=9121.67 00:21:30.464 clat (msec): min=56, max=581, avg=327.78, stdev=61.01 00:21:30.464 lat (msec): min=56, max=581, avg=332.78, stdev=61.25 00:21:30.464 clat percentiles (msec): 00:21:30.464 | 1.00th=[ 112], 5.00th=[ 275], 10.00th=[ 288], 20.00th=[ 296], 00:21:30.464 | 30.00th=[ 305], 40.00th=[ 309], 50.00th=[ 326], 60.00th=[ 326], 00:21:30.464 | 70.00th=[ 330], 80.00th=[ 338], 90.00th=[ 409], 95.00th=[ 460], 00:21:30.464 | 99.00th=[ 506], 99.50th=[ 542], 99.90th=[ 584], 99.95th=[ 584], 00:21:30.464 | 99.99th=[ 584] 00:21:30.464 bw ( KiB/s): min=32768, max=55296, per=7.23%, avg=48823.05, stdev=6487.81, samples=20 00:21:30.464 iops : min= 128, max= 216, avg=190.55, stdev=25.30, samples=20 00:21:30.464 lat (msec) : 100=0.86%, 250=1.78%, 
500=95.28%, 750=2.08% 00:21:30.464 cpu : usr=0.60%, sys=0.79%, ctx=2152, majf=0, minf=1 00:21:30.464 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.8% 00:21:30.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.464 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:30.464 issued rwts: total=0,1971,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:30.464 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:30.464 job6: (groupid=0, jobs=1): err= 0: pid=78393: Wed Nov 6 14:25:57 2024 00:21:30.464 write: IOPS=219, BW=54.8MiB/s (57.5MB/s)(558MiB/10173msec); 0 zone resets 00:21:30.464 slat (usec): min=14, max=222264, avg=4296.12, stdev=9126.98 00:21:30.464 clat (msec): min=89, max=549, avg=287.54, stdev=69.16 00:21:30.464 lat (msec): min=89, max=571, avg=291.83, stdev=69.93 00:21:30.464 clat percentiles (msec): 00:21:30.464 | 1.00th=[ 123], 5.00th=[ 220], 10.00th=[ 232], 20.00th=[ 243], 00:21:30.464 | 30.00th=[ 247], 40.00th=[ 251], 50.00th=[ 284], 60.00th=[ 296], 00:21:30.464 | 70.00th=[ 305], 80.00th=[ 326], 90.00th=[ 388], 95.00th=[ 443], 00:21:30.464 | 99.00th=[ 472], 99.50th=[ 502], 99.90th=[ 550], 99.95th=[ 550], 00:21:30.464 | 99.99th=[ 550] 00:21:30.464 bw ( KiB/s): min=26059, max=69632, per=8.21%, avg=55449.90, stdev=11897.23, samples=20 00:21:30.464 iops : min= 101, max= 272, avg=216.45, stdev=46.57, samples=20 00:21:30.464 lat (msec) : 100=0.31%, 250=39.37%, 500=59.78%, 750=0.54% 00:21:30.464 cpu : usr=0.65%, sys=0.87%, ctx=1872, majf=0, minf=1 00:21:30.464 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:21:30.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.464 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:30.464 issued rwts: total=0,2230,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:30.464 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:30.464 job7: (groupid=0, jobs=1): err= 0: pid=78394: Wed Nov 6 14:25:57 2024 00:21:30.464 write: IOPS=191, BW=47.9MiB/s (50.2MB/s)(491MiB/10256msec); 0 zone resets 00:21:30.464 slat (usec): min=15, max=92857, avg=5078.21, stdev=9251.58 00:21:30.464 clat (msec): min=93, max=570, avg=328.97, stdev=56.04 00:21:30.464 lat (msec): min=93, max=570, avg=334.05, stdev=56.21 00:21:30.464 clat percentiles (msec): 00:21:30.464 | 1.00th=[ 176], 5.00th=[ 275], 10.00th=[ 288], 20.00th=[ 296], 00:21:30.464 | 30.00th=[ 305], 40.00th=[ 309], 50.00th=[ 326], 60.00th=[ 326], 00:21:30.464 | 70.00th=[ 330], 80.00th=[ 338], 90.00th=[ 405], 95.00th=[ 456], 00:21:30.464 | 99.00th=[ 502], 99.50th=[ 531], 99.90th=[ 567], 99.95th=[ 575], 00:21:30.464 | 99.99th=[ 575] 00:21:30.464 bw ( KiB/s): min=34816, max=55406, per=7.20%, avg=48625.10, stdev=6658.05, samples=20 00:21:30.465 iops : min= 136, max= 216, avg=189.80, stdev=25.95, samples=20 00:21:30.465 lat (msec) : 100=0.20%, 250=1.63%, 500=97.00%, 750=1.17% 00:21:30.465 cpu : usr=0.63%, sys=0.72%, ctx=2323, majf=0, minf=1 00:21:30.465 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.8% 00:21:30.465 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.465 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:30.465 issued rwts: total=0,1964,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:30.465 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:30.465 job8: (groupid=0, jobs=1): err= 0: pid=78395: Wed Nov 6 14:25:57 2024 00:21:30.465 write: IOPS=335, 
BW=83.9MiB/s (88.0MB/s)(853MiB/10168msec); 0 zone resets 00:21:30.465 slat (usec): min=13, max=163245, avg=2791.86, stdev=5856.81 00:21:30.465 clat (msec): min=10, max=486, avg=187.85, stdev=62.42 00:21:30.465 lat (msec): min=11, max=486, avg=190.64, stdev=63.10 00:21:30.465 clat percentiles (msec): 00:21:30.465 | 1.00th=[ 33], 5.00th=[ 138], 10.00th=[ 146], 20.00th=[ 153], 00:21:30.465 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 167], 00:21:30.465 | 70.00th=[ 228], 80.00th=[ 245], 90.00th=[ 249], 95.00th=[ 259], 00:21:30.465 | 99.00th=[ 409], 99.50th=[ 422], 99.90th=[ 464], 99.95th=[ 489], 00:21:30.465 | 99.99th=[ 489] 00:21:30.465 bw ( KiB/s): min=33280, max=110080, per=12.69%, avg=85688.60, stdev=21604.41, samples=20 00:21:30.465 iops : min= 130, max= 430, avg=334.65, stdev=84.40, samples=20 00:21:30.465 lat (msec) : 20=0.38%, 50=1.26%, 100=1.88%, 250=87.54%, 500=8.94% 00:21:30.465 cpu : usr=0.99%, sys=1.13%, ctx=3852, majf=0, minf=1 00:21:30.465 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:21:30.465 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.465 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:30.465 issued rwts: total=0,3412,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:30.465 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:30.465 job9: (groupid=0, jobs=1): err= 0: pid=78396: Wed Nov 6 14:25:57 2024 00:21:30.465 write: IOPS=256, BW=64.1MiB/s (67.2MB/s)(652MiB/10170msec); 0 zone resets 00:21:30.465 slat (usec): min=23, max=57157, avg=3828.76, stdev=6939.47 00:21:30.465 clat (msec): min=40, max=366, avg=245.63, stdev=52.72 00:21:30.465 lat (msec): min=41, max=366, avg=249.46, stdev=53.12 00:21:30.465 clat percentiles (msec): 00:21:30.465 | 1.00th=[ 121], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 203], 00:21:30.465 | 30.00th=[ 205], 40.00th=[ 207], 50.00th=[ 218], 60.00th=[ 279], 00:21:30.465 | 70.00th=[ 292], 80.00th=[ 300], 90.00th=[ 313], 95.00th=[ 317], 00:21:30.465 | 99.00th=[ 351], 99.50th=[ 355], 99.90th=[ 355], 99.95th=[ 368], 00:21:30.465 | 99.99th=[ 368] 00:21:30.465 bw ( KiB/s): min=45056, max=81920, per=9.65%, avg=65119.90, stdev=13003.73, samples=20 00:21:30.465 iops : min= 176, max= 320, avg=254.25, stdev=50.83, samples=20 00:21:30.465 lat (msec) : 50=0.15%, 100=0.61%, 250=54.87%, 500=44.36% 00:21:30.465 cpu : usr=0.74%, sys=0.95%, ctx=2776, majf=0, minf=1 00:21:30.465 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:21:30.465 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.465 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:30.465 issued rwts: total=0,2608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:30.465 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:30.465 job10: (groupid=0, jobs=1): err= 0: pid=78397: Wed Nov 6 14:25:57 2024 00:21:30.465 write: IOPS=199, BW=50.0MiB/s (52.4MB/s)(513MiB/10251msec); 0 zone resets 00:21:30.465 slat (usec): min=20, max=99507, avg=4706.58, stdev=8940.09 00:21:30.465 clat (msec): min=79, max=567, avg=315.05, stdev=53.78 00:21:30.465 lat (msec): min=85, max=567, avg=319.76, stdev=54.01 00:21:30.465 clat percentiles (msec): 00:21:30.465 | 1.00th=[ 121], 5.00th=[ 253], 10.00th=[ 279], 20.00th=[ 292], 00:21:30.465 | 30.00th=[ 300], 40.00th=[ 305], 50.00th=[ 313], 60.00th=[ 321], 00:21:30.465 | 70.00th=[ 326], 80.00th=[ 330], 90.00th=[ 355], 95.00th=[ 422], 00:21:30.465 | 99.00th=[ 481], 99.50th=[ 502], 99.90th=[ 550], 
99.95th=[ 567], 00:21:30.465 | 99.99th=[ 567] 00:21:30.465 bw ( KiB/s): min=36864, max=60928, per=7.53%, avg=50816.05, stdev=4984.18, samples=20 00:21:30.465 iops : min= 144, max= 238, avg=198.35, stdev=19.48, samples=20 00:21:30.465 lat (msec) : 100=0.20%, 250=4.63%, 500=94.49%, 750=0.68% 00:21:30.465 cpu : usr=0.57%, sys=0.60%, ctx=2154, majf=0, minf=1 00:21:30.465 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9% 00:21:30.465 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.465 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:30.465 issued rwts: total=0,2050,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:30.465 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:30.465 00:21:30.465 Run status group 0 (all jobs): 00:21:30.465 WRITE: bw=659MiB/s (691MB/s), 47.1MiB/s-91.6MiB/s (49.3MB/s-96.0MB/s), io=6762MiB (7090MB), run=10165-10256msec 00:21:30.465 00:21:30.465 Disk stats (read/write): 00:21:30.465 nvme0n1: ios=50/5064, merge=0/0, ticks=80/1209051, in_queue=1209131, util=97.92% 00:21:30.465 nvme10n1: ios=49/5034, merge=0/0, ticks=81/1208411, in_queue=1208492, util=98.02% 00:21:30.465 nvme1n1: ios=49/3913, merge=0/0, ticks=76/1236556, in_queue=1236632, util=98.26% 00:21:30.465 nvme2n1: ios=48/3832, merge=0/0, ticks=72/1234252, in_queue=1234324, util=98.24% 00:21:30.465 nvme3n1: ios=47/7327, merge=0/0, ticks=51/1207034, in_queue=1207085, util=98.40% 00:21:30.465 nvme4n1: ios=22/3925, merge=0/0, ticks=51/1236665, in_queue=1236716, util=98.26% 00:21:30.465 nvme5n1: ios=13/4318, merge=0/0, ticks=34/1207064, in_queue=1207098, util=98.11% 00:21:30.465 nvme6n1: ios=5/3903, merge=0/0, ticks=35/1236064, in_queue=1236099, util=98.33% 00:21:30.465 nvme7n1: ios=0/6681, merge=0/0, ticks=0/1205779, in_queue=1205779, util=98.48% 00:21:30.465 nvme8n1: ios=0/5082, merge=0/0, ticks=0/1209744, in_queue=1209744, util=98.79% 00:21:30.465 nvme9n1: ios=0/4074, merge=0/0, ticks=0/1236291, in_queue=1236291, util=98.66% 00:21:30.465 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:21:30.465 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:21:30.465 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:30.465 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:30.465 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:30.465 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:21:30.465 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:21:30.465 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:21:30.465 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK1 00:21:30.465 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK1 00:21:30.465 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:21:30.465 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:21:30.465 14:25:57 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:30.465 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.465 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:30.465 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.465 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:30.465 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:21:30.465 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:21:30.465 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:21:30.465 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:21:30.465 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:21:30.465 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK2 00:21:30.465 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:21:30.465 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK2 00:21:30.465 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:21:30.465 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:30.465 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.465 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:30.465 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.465 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:30.465 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:21:30.465 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:21:30.465 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:21:30.465 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:21:30.465 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:21:30.465 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK3 00:21:30.465 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:21:30.465 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK3 00:21:30.465 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:21:30.465 14:25:57 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:21:30.465 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.465 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:30.465 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.465 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:30.465 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:21:30.465 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:21:30.466 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:21:30.466 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:21:30.466 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:21:30.466 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK4 00:21:30.466 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:21:30.466 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK4 00:21:30.466 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:21:30.466 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:21:30.466 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.466 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:30.466 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.466 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:30.466 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:21:30.466 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:21:30.466 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:21:30.466 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:21:30.466 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:21:30.466 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK5 00:21:30.466 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK5 00:21:30.466 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:21:30.466 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:21:30.466 14:25:57 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:21:30.466 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.466 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:30.466 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.466 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:30.466 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:21:30.466 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:21:30.466 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:21:30.466 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:21:30.466 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:21:30.466 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK6 00:21:30.466 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:21:30.466 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK6 00:21:30.466 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:21:30.466 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:21:30.466 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.466 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:30.466 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.466 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:30.466 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:21:30.726 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:21:30.726 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:21:30.726 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:21:30.726 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:21:30.726 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK7 00:21:30.726 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:21:30.726 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK7 00:21:30.726 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:21:30.726 14:25:58 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:21:30.726 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.726 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:30.726 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.726 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:30.726 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:21:30.726 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:21:30.726 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:21:30.726 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:21:30.726 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK8 00:21:30.726 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:21:30.726 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK8 00:21:30.726 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:21:30.726 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:21:30.726 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:21:30.726 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.726 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:30.726 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.726 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:30.726 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:21:30.726 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:21:30.726 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:21:30.726 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:21:30.726 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:21:30.726 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK9 00:21:30.983 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:21:30.983 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK9 00:21:30.983 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:21:30.983 14:25:58 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:21:30.983 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.983 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:30.983 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.983 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:30.983 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:21:30.983 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:21:30.983 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:21:30.983 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:21:30.983 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:21:30.983 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK10 00:21:30.983 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK10 00:21:30.983 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:21:30.983 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:21:30.984 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:21:30.984 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.984 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:30.984 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.984 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:30.984 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:21:30.984 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:21:30.984 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:21:30.984 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:21:30.984 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK11 00:21:30.984 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:21:30.984 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK11 00:21:30.984 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:21:31.241 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:21:31.241 
14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:21:31.241 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.241 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:31.241 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.241 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:21:31.241 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:21:31.241 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:21:31.241 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:31.241 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:21:31.241 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:31.241 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:21:31.241 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:31.241 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:31.241 rmmod nvme_tcp 00:21:31.241 rmmod nvme_fabrics 00:21:31.241 rmmod nvme_keyring 00:21:31.241 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:31.241 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:21:31.241 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:21:31.241 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 77701 ']' 00:21:31.241 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 77701 00:21:31.241 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@952 -- # '[' -z 77701 ']' 00:21:31.241 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # kill -0 77701 00:21:31.241 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@957 -- # uname 00:21:31.241 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:31.241 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77701 00:21:31.241 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:31.241 killing process with pid 77701 00:21:31.241 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:31.242 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77701' 00:21:31.242 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@971 -- # kill 77701 00:21:31.242 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@976 -- # wait 77701 
00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@300 -- # return 0 00:21:35.463 00:21:35.463 real 0m54.566s 00:21:35.463 user 3m8.644s 00:21:35.463 sys 0m27.078s 00:21:35.463 14:26:02 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:35.463 ************************************ 00:21:35.463 END TEST nvmf_multiconnection 00:21:35.463 ************************************ 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:35.463 ************************************ 00:21:35.463 START TEST nvmf_initiator_timeout 00:21:35.463 ************************************ 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:21:35.463 * Looking for test storage... 00:21:35.463 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1691 -- # lcov --version 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:35.463 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:35.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.463 --rc genhtml_branch_coverage=1 00:21:35.463 --rc genhtml_function_coverage=1 00:21:35.463 --rc genhtml_legend=1 00:21:35.463 --rc geninfo_all_blocks=1 00:21:35.463 --rc geninfo_unexecuted_blocks=1 00:21:35.463 00:21:35.464 ' 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:35.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.464 --rc genhtml_branch_coverage=1 00:21:35.464 --rc genhtml_function_coverage=1 00:21:35.464 --rc genhtml_legend=1 00:21:35.464 --rc geninfo_all_blocks=1 00:21:35.464 --rc geninfo_unexecuted_blocks=1 00:21:35.464 00:21:35.464 ' 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:35.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.464 --rc genhtml_branch_coverage=1 00:21:35.464 --rc genhtml_function_coverage=1 00:21:35.464 --rc genhtml_legend=1 00:21:35.464 --rc geninfo_all_blocks=1 00:21:35.464 --rc geninfo_unexecuted_blocks=1 00:21:35.464 00:21:35.464 ' 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:35.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.464 --rc genhtml_branch_coverage=1 00:21:35.464 --rc genhtml_function_coverage=1 00:21:35.464 --rc genhtml_legend=1 00:21:35.464 --rc geninfo_all_blocks=1 00:21:35.464 --rc geninfo_unexecuted_blocks=1 00:21:35.464 00:21:35.464 ' 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:35.464 14:26:02 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:35.464 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 
00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:35.464 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:35.465 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:35.465 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:35.465 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:35.465 Cannot find device "nvmf_init_br" 00:21:35.465 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # true 00:21:35.465 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:35.465 Cannot find device "nvmf_init_br2" 00:21:35.465 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # true 00:21:35.465 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:35.465 Cannot find device "nvmf_tgt_br" 00:21:35.465 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # true 00:21:35.465 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:35.465 Cannot find device "nvmf_tgt_br2" 00:21:35.465 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # true 00:21:35.465 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:35.465 Cannot find device "nvmf_init_br" 00:21:35.465 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # true 00:21:35.465 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:35.465 Cannot find device "nvmf_init_br2" 00:21:35.465 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # true 00:21:35.465 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:35.465 Cannot find device "nvmf_tgt_br" 00:21:35.465 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # true 00:21:35.465 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:35.465 Cannot find device "nvmf_tgt_br2" 00:21:35.465 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # true 00:21:35.465 14:26:03 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:35.723 Cannot find device "nvmf_br" 00:21:35.723 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # true 00:21:35.723 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:35.723 Cannot find device "nvmf_init_if" 00:21:35.723 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # true 00:21:35.723 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:35.723 Cannot find device "nvmf_init_if2" 00:21:35.723 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # true 00:21:35.723 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:35.723 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:35.723 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # true 00:21:35.723 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:35.723 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:35.723 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # true 00:21:35.723 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:35.723 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:35.723 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:35.723 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:35.723 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:35.723 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:35.723 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:35.723 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:35.723 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:35.723 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:35.724 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:35.724 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:35.724 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:35.724 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@198 -- # ip link set 
nvmf_init_br up 00:21:35.724 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:35.724 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:35.724 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:35.724 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:35.724 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:35.724 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:35.724 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:35.982 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:35.982 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:35.982 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:35.982 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:35.982 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:35.982 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:35.982 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:35.982 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:35.982 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:35.982 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:35.982 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:35.982 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:35.982 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:35.982 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.109 ms 00:21:35.982 00:21:35.982 --- 10.0.0.3 ping statistics --- 00:21:35.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.982 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:21:35.982 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:35.982 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:21:35.982 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.085 ms 00:21:35.982 00:21:35.982 --- 10.0.0.4 ping statistics --- 00:21:35.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.982 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:21:35.982 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:35.982 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:35.982 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:21:35.982 00:21:35.982 --- 10.0.0.1 ping statistics --- 00:21:35.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.982 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:21:35.982 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:35.982 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:35.982 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:21:35.982 00:21:35.982 --- 10.0.0.2 ping statistics --- 00:21:35.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.982 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:21:35.982 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:35.982 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@461 -- # return 0 00:21:35.982 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:35.982 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:35.982 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:35.982 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:35.982 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:35.982 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:35.982 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:35.982 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:21:35.982 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:35.982 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:35.982 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:35.982 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:35.982 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=78865 00:21:35.982 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 78865 00:21:35.982 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # '[' -z 78865 ']' 00:21:35.982 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:35.982 14:26:03 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:35.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:35.982 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:35.982 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:35.983 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:36.240 [2024-11-06 14:26:03.656023] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:21:36.240 [2024-11-06 14:26:03.656150] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:36.240 [2024-11-06 14:26:03.843197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:36.499 [2024-11-06 14:26:03.996130] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:36.499 [2024-11-06 14:26:03.996184] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:36.499 [2024-11-06 14:26:03.996201] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:36.499 [2024-11-06 14:26:03.996212] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:36.499 [2024-11-06 14:26:03.996226] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:36.499 [2024-11-06 14:26:03.998792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:36.499 [2024-11-06 14:26:03.998985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:36.499 [2024-11-06 14:26:03.999063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:36.499 [2024-11-06 14:26:03.999099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:36.756 [2024-11-06 14:26:04.248676] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:37.015 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:37.015 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@866 -- # return 0 00:21:37.015 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:37.015 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:37.015 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:37.015 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:37.015 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:21:37.015 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:37.015 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.015 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:37.015 Malloc0 00:21:37.015 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.015 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:21:37.015 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.015 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:37.273 Delay0 00:21:37.273 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.273 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:37.273 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.273 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:37.273 [2024-11-06 14:26:04.660597] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:37.273 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.273 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:21:37.273 14:26:04 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.273 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:37.273 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.273 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:21:37.273 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.273 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:37.273 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.273 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:37.273 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.273 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:37.273 [2024-11-06 14:26:04.704895] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:37.273 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.273 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid=406d54d0-5e94-472a-a2b3-4291f3ac81e0 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:21:37.273 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:21:37.273 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # local i=0 00:21:37.273 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:21:37.273 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:21:37.273 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # sleep 2 00:21:39.804 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:21:39.804 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:21:39.804 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:21:39.804 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:21:39.804 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:21:39.804 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # return 0 00:21:39.804 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=78932 00:21:39.804 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@37 -- # sleep 3 00:21:39.804 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:21:39.804 [global] 00:21:39.804 thread=1 00:21:39.804 invalidate=1 00:21:39.804 rw=write 00:21:39.804 time_based=1 00:21:39.804 runtime=60 00:21:39.804 ioengine=libaio 00:21:39.804 direct=1 00:21:39.804 bs=4096 00:21:39.804 iodepth=1 00:21:39.804 norandommap=0 00:21:39.804 numjobs=1 00:21:39.804 00:21:39.804 verify_dump=1 00:21:39.804 verify_backlog=512 00:21:39.804 verify_state_save=0 00:21:39.804 do_verify=1 00:21:39.804 verify=crc32c-intel 00:21:39.804 [job0] 00:21:39.804 filename=/dev/nvme0n1 00:21:39.804 Could not set queue depth (nvme0n1) 00:21:39.804 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:39.804 fio-3.35 00:21:39.804 Starting 1 thread 00:21:42.354 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:21:42.354 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.354 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:42.354 true 00:21:42.354 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.354 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:21:42.354 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.354 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:42.354 true 00:21:42.354 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.354 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:21:42.354 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.354 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:42.354 true 00:21:42.354 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.354 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:21:42.354 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.354 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:42.354 true 00:21:42.354 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.354 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:21:45.642 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:21:45.642 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.642 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:45.642 true 00:21:45.642 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.642 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:21:45.642 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.642 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:45.642 true 00:21:45.642 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.642 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:21:45.642 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.642 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:45.642 true 00:21:45.642 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.642 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:21:45.643 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.643 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:45.643 true 00:21:45.643 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.643 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:21:45.643 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 78932 00:22:41.882 00:22:41.882 job0: (groupid=0, jobs=1): err= 0: pid=78953: Wed Nov 6 14:27:07 2024 00:22:41.882 read: IOPS=812, BW=3251KiB/s (3329kB/s)(190MiB/60000msec) 00:22:41.882 slat (usec): min=7, max=11944, avg= 9.19, stdev=66.83 00:22:41.882 clat (usec): min=147, max=40572k, avg=1045.15, stdev=183731.70 00:22:41.882 lat (usec): min=159, max=40572k, avg=1054.34, stdev=183731.71 00:22:41.882 clat percentiles (usec): 00:22:41.882 | 1.00th=[ 167], 5.00th=[ 176], 10.00th=[ 182], 20.00th=[ 190], 00:22:41.882 | 30.00th=[ 198], 40.00th=[ 204], 50.00th=[ 210], 60.00th=[ 217], 00:22:41.882 | 70.00th=[ 225], 80.00th=[ 235], 90.00th=[ 247], 95.00th=[ 260], 00:22:41.882 | 99.00th=[ 285], 99.50th=[ 297], 99.90th=[ 334], 99.95th=[ 383], 00:22:41.882 | 99.99th=[ 832] 00:22:41.882 write: IOPS=819, BW=3277KiB/s (3355kB/s)(192MiB/60000msec); 0 zone resets 00:22:41.882 slat (usec): min=9, max=3468, avg=13.31, stdev=22.06 00:22:41.883 clat (usec): min=8, max=2368, avg=159.54, stdev=32.31 00:22:41.883 lat (usec): min=125, max=3510, avg=172.84, stdev=39.49 00:22:41.883 clat percentiles (usec): 00:22:41.883 | 1.00th=[ 122], 5.00th=[ 129], 10.00th=[ 135], 20.00th=[ 141], 00:22:41.883 | 30.00th=[ 147], 40.00th=[ 151], 50.00th=[ 157], 60.00th=[ 161], 00:22:41.883 | 70.00th=[ 167], 80.00th=[ 176], 90.00th=[ 188], 95.00th=[ 198], 00:22:41.883 | 99.00th=[ 229], 
99.50th=[ 243], 99.90th=[ 412], 99.95th=[ 603], 00:22:41.883 | 99.99th=[ 1647] 00:22:41.883 bw ( KiB/s): min= 4096, max=12288, per=100.00%, avg=9870.77, stdev=1833.42, samples=39 00:22:41.883 iops : min= 1024, max= 3072, avg=2467.67, stdev=458.33, samples=39 00:22:41.883 lat (usec) : 10=0.01%, 250=95.47%, 500=4.48%, 750=0.03%, 1000=0.01% 00:22:41.883 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, >=2000=0.01% 00:22:41.883 cpu : usr=0.40%, sys=1.49%, ctx=97922, majf=0, minf=5 00:22:41.883 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:41.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:41.883 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:41.883 issued rwts: total=48763,49152,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:41.883 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:41.883 00:22:41.883 Run status group 0 (all jobs): 00:22:41.883 READ: bw=3251KiB/s (3329kB/s), 3251KiB/s-3251KiB/s (3329kB/s-3329kB/s), io=190MiB (200MB), run=60000-60000msec 00:22:41.883 WRITE: bw=3277KiB/s (3355kB/s), 3277KiB/s-3277KiB/s (3355kB/s-3355kB/s), io=192MiB (201MB), run=60000-60000msec 00:22:41.883 00:22:41.883 Disk stats (read/write): 00:22:41.883 nvme0n1: ios=48845/48739, merge=0/0, ticks=10785/8155, in_queue=18940, util=99.71% 00:22:41.883 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:41.883 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:41.883 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:22:41.883 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1221 -- # local i=0 00:22:41.883 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:22:41.883 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:41.883 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:22:41.883 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:41.883 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1233 -- # return 0 00:22:41.883 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:22:41.883 nvmf hotplug test: fio successful as expected 00:22:41.883 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:22:41.883 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:41.883 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.883 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:41.883 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.883 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 
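Editor's note: the initiator_timeout sequence traced above reduces to a simple pattern: start the 60-second fio verify job in the background, push the Delay0 bdev's simulated latencies up high enough to stall outstanding I/O past the initiator timeout, then drop them back down so the job can drain and exit 0. A condensed, illustrative sketch using the same paths and values that appear in the trace (rpc_cmd in the trace is a thin test wrapper; plain rpc.py is used here for clarity):

    # 4 KiB writes, queue depth 1, 60 s, with verification (as in the trace)
    /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v &
    fio_pid=$!
    sleep 3

    # raise the simulated latencies so in-flight I/O exceeds the initiator timeout
    for lat in avg_read avg_write p99_read p99_write; do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_update_latency Delay0 "$lat" 31000000
    done
    sleep 3

    # restore a tiny latency so the queued I/O drains and fio can finish cleanly
    for lat in avg_read avg_write p99_read p99_write; do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_update_latency Delay0 "$lat" 30
    done
    wait "$fio_pid"   # the test then asserts fio exited with status 0

The "nvmf hotplug test: fio successful as expected" message above is printed only when that final wait returns 0.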
00:22:41.883 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:22:41.883 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:22:41.883 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:41.883 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:22:41.883 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:41.883 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:22:41.883 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:41.883 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:41.883 rmmod nvme_tcp 00:22:41.883 rmmod nvme_fabrics 00:22:41.883 rmmod nvme_keyring 00:22:41.883 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:41.883 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:22:41.883 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:22:41.883 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 78865 ']' 00:22:41.883 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 78865 00:22:41.883 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # '[' -z 78865 ']' 00:22:41.883 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # kill -0 78865 00:22:41.883 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@957 -- # uname 00:22:41.883 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:41.883 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78865 00:22:41.883 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:41.883 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:41.883 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 78865' 00:22:41.883 killing process with pid 78865 00:22:41.883 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@971 -- # kill 78865 00:22:41.883 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@976 -- # wait 78865 00:22:41.883 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:41.883 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:41.883 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:41.883 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:22:41.883 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:22:41.883 14:27:08 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:22:41.883 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:41.883 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:41.883 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:41.883 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:41.883 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:41.883 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:41.883 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:41.883 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:41.883 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:41.883 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:41.883 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:41.883 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:41.883 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:41.883 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:41.883 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:41.883 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:41.883 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:41.883 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:41.883 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:41.883 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.883 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@300 -- # return 0 00:22:41.883 ************************************ 00:22:41.883 END TEST nvmf_initiator_timeout 00:22:41.883 ************************************ 00:22:41.883 00:22:41.883 real 1m6.568s 00:22:41.883 user 3m56.093s 00:22:41.883 sys 0m23.325s 00:22:41.883 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:41.883 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:41.883 14:27:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:22:41.883 14:27:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test 
nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:22:41.883 14:27:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:41.883 14:27:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:41.883 14:27:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:41.883 ************************************ 00:22:41.883 START TEST nvmf_nsid 00:22:41.883 ************************************ 00:22:41.883 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:22:41.883 * Looking for test storage... 00:22:41.883 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:41.883 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:41.883 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lcov --version 00:22:41.883 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:42.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:42.145 --rc genhtml_branch_coverage=1 00:22:42.145 --rc genhtml_function_coverage=1 00:22:42.145 --rc genhtml_legend=1 00:22:42.145 --rc geninfo_all_blocks=1 00:22:42.145 --rc geninfo_unexecuted_blocks=1 00:22:42.145 00:22:42.145 ' 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:42.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:42.145 --rc genhtml_branch_coverage=1 00:22:42.145 --rc genhtml_function_coverage=1 00:22:42.145 --rc genhtml_legend=1 00:22:42.145 --rc geninfo_all_blocks=1 00:22:42.145 --rc geninfo_unexecuted_blocks=1 00:22:42.145 00:22:42.145 ' 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:42.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:42.145 --rc genhtml_branch_coverage=1 00:22:42.145 --rc genhtml_function_coverage=1 00:22:42.145 --rc genhtml_legend=1 00:22:42.145 --rc geninfo_all_blocks=1 00:22:42.145 --rc geninfo_unexecuted_blocks=1 00:22:42.145 00:22:42.145 ' 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:42.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:42.145 --rc genhtml_branch_coverage=1 00:22:42.145 --rc genhtml_function_coverage=1 00:22:42.145 --rc genhtml_legend=1 00:22:42.145 --rc geninfo_all_blocks=1 00:22:42.145 --rc geninfo_unexecuted_blocks=1 00:22:42.145 00:22:42.145 ' 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
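Editor's note: the long cmp_versions trace just above is only deciding whether the installed lcov is older than 2 so the coverage flags can be spelled the pre-2.0 way. An illustrative re-implementation of that dotted-version check (the real helper lives in scripts/common.sh and supports more operators):

    version_lt() {
        local IFS=.- i
        local -a v1 v2
        read -ra v1 <<< "$1"          # split "1.15" into components
        read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            if (( ${v1[i]:-0} < ${v2[i]:-0} )); then return 0; fi
            if (( ${v1[i]:-0} > ${v2[i]:-0} )); then return 1; fi
        done
        return 1   # equal versions are not "less than"
    }

    # pick the pre-2.0 option spellings when the installed lcov is older than 2
    if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi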
00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:42.145 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:22:42.145 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:22:42.146 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:42.146 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:42.146 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:42.146 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:42.146 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:42.146 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.146 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:42.146 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.146 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:42.146 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:42.146 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:42.146 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:42.146 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:42.146 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:42.146 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:42.146 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:42.146 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:42.146 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:42.146 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:42.146 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:42.146 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:42.146 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:42.146 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:42.146 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:42.146 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:42.146 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:42.146 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:42.146 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:42.146 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:42.146 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:42.146 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:42.146 Cannot find device "nvmf_init_br" 00:22:42.146 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:22:42.146 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:42.146 Cannot find device "nvmf_init_br2" 00:22:42.146 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:22:42.146 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:42.146 Cannot find device "nvmf_tgt_br" 00:22:42.146 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:22:42.146 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:42.146 Cannot find device "nvmf_tgt_br2" 00:22:42.146 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:22:42.146 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:42.146 Cannot find device "nvmf_init_br" 00:22:42.146 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:22:42.146 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:42.146 Cannot find device "nvmf_init_br2" 00:22:42.146 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:22:42.146 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:42.146 Cannot find device "nvmf_tgt_br" 00:22:42.146 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:22:42.146 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:42.146 Cannot find device "nvmf_tgt_br2" 00:22:42.146 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:22:42.146 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:42.146 Cannot find device "nvmf_br" 00:22:42.146 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:22:42.146 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:42.405 Cannot find device "nvmf_init_if" 00:22:42.405 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:22:42.405 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:42.405 Cannot find device "nvmf_init_if2" 00:22:42.406 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:22:42.406 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:42.406 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:42.406 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:22:42.406 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:22:42.406 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:42.406 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:22:42.406 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:42.406 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:42.406 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:42.406 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:42.406 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:42.406 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:42.406 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:42.406 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:42.406 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:42.406 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:42.406 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:42.406 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:42.406 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:42.406 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:42.406 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:42.406 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:42.406 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:42.406 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:42.406 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:42.406 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:42.406 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:42.406 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:42.406 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:42.665 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:42.665 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:42.665 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
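Editor's note: the nvmf_veth_init steps traced above (the initial "Cannot find device" / "Cannot open network namespace" errors are just best-effort cleanup of a previous run) build a small virtual topology: two initiator-side and two target-side veth pairs, the target ends moved into the nvmf_tgt_ns_spdk namespace, and all peer ends enslaved to the nvmf_br bridge. Condensed from the trace (the real helper also brings every endpoint up and installs the iptables ACCEPT rules that follow):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
        ip link set "$dev" master nvmf_br
    done

The four ping checks that follow confirm both directions of this topology before the target is started.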
00:22:42.665 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:42.665 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:42.665 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:42.665 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:42.665 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:42.665 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:42.665 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:42.665 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:42.665 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.103 ms 00:22:42.665 00:22:42.665 --- 10.0.0.3 ping statistics --- 00:22:42.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.665 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:22:42.665 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:42.665 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:42.665 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.034 ms 00:22:42.665 00:22:42.665 --- 10.0.0.4 ping statistics --- 00:22:42.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.665 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:22:42.665 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:42.665 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:42.665 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:22:42.665 00:22:42.665 --- 10.0.0.1 ping statistics --- 00:22:42.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.665 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:22:42.665 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:42.665 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:42.665 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:22:42.665 00:22:42.665 --- 10.0.0.2 ping statistics --- 00:22:42.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.665 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:22:42.665 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:42.665 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:22:42.665 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:42.665 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:42.665 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:42.665 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:42.665 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:42.665 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:42.665 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:42.665 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:22:42.665 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:42.665 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:42.665 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:42.665 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=79834 00:22:42.665 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:22:42.665 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 79834 00:22:42.665 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 79834 ']' 00:22:42.665 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:42.665 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:42.665 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:42.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:42.665 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:42.665 14:27:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:42.925 [2024-11-06 14:27:10.300212] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
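Editor's note: nvmfappstart, traced just above, launches the NVMe-oF target inside the namespace and then blocks until its RPC socket answers. A simplified stand-in for that pattern (the real waitforlisten helper in autotest_common.sh is more elaborate; the poll loop below is an assumption-level sketch, using the default /var/tmp/spdk.sock socket shown in the trace):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 &
    nvmfpid=$!

    # poll until the target's RPC server is reachable, bailing out if it dies
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt died during startup" >&2; exit 1; }
        sleep 0.5
    done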
00:22:42.925 [2024-11-06 14:27:10.300362] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:42.925 [2024-11-06 14:27:10.487692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:43.183 [2024-11-06 14:27:10.634239] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:43.183 [2024-11-06 14:27:10.634300] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:43.183 [2024-11-06 14:27:10.634317] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:43.183 [2024-11-06 14:27:10.634339] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:43.183 [2024-11-06 14:27:10.634353] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:43.183 [2024-11-06 14:27:10.635759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:43.442 [2024-11-06 14:27:10.883279] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:43.702 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:43.702 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:22:43.702 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:43.702 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:43.702 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:43.702 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:43.702 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:43.702 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:22:43.702 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=79866 00:22:43.702 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:22:43.702 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:22:43.702 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:22:43.702 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:43.702 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:43.702 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:43.703 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:43.703 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:43.703 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:43.703 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:43.703 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 
-- # [[ -z 10.0.0.1 ]] 00:22:43.703 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:43.703 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:22:43.703 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:22:43.703 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=d47a4f9f-7236-4c57-b0bb-36d20ceec22a 00:22:43.703 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:22:43.703 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=378709b7-886d-49af-894d-b306624b9e00 00:22:43.703 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:22:43.703 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=f53b72f9-da77-4fcb-adcf-64b3cd50d83d 00:22:43.703 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:22:43.703 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.703 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:43.703 null0 00:22:43.703 null1 00:22:43.703 null2 00:22:43.703 [2024-11-06 14:27:11.243749] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:43.703 [2024-11-06 14:27:11.267966] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:43.703 [2024-11-06 14:27:11.296303] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:22:43.703 [2024-11-06 14:27:11.296570] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79866 ] 00:22:43.703 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.703 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 79866 /var/tmp/tgt2.sock 00:22:43.703 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 79866 ']' 00:22:43.703 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/tgt2.sock 00:22:43.703 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:43.703 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:22:43.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
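Editor's note: with the three namespace UUIDs generated above (ns1uuid, ns2uuid, ns3uuid), the trace that follows connects to the second target's subsystem over TCP and checks that every namespace reports an NGUID equal to its creation UUID with the dashes stripped. Reduced to its essentials (check_nguid is a hypothetical helper introduced here for illustration; the connect arguments and the id-ns/jq/tr pipeline are the ones visible in the trace):

    nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 \
        --hostid=406d54d0-5e94-472a-a2b3-4291f3ac81e0

    # each namespace must expose NGUID == its creation UUID with the '-' removed
    check_nguid() {
        local ns=$1 uuid=$2 want got
        want=$(tr -d - <<< "$uuid")
        got=$(nvme id-ns "/dev/nvme0n$ns" -o json | jq -r .nguid)
        [[ "${got^^}" == "${want^^}" ]] || { echo "NGUID mismatch on nvme0n$ns" >&2; return 1; }
    }
    check_nguid 1 "$ns1uuid"
    check_nguid 2 "$ns2uuid"
    check_nguid 3 "$ns3uuid"

    nvme disconnect -d /dev/nvme0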
00:22:43.703 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:43.703 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:43.962 [2024-11-06 14:27:11.479340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.221 [2024-11-06 14:27:11.598388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:44.479 [2024-11-06 14:27:11.856567] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:45.046 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:45.046 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:22:45.046 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:22:45.305 [2024-11-06 14:27:12.818162] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:45.305 [2024-11-06 14:27:12.834341] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:22:45.305 nvme0n1 nvme0n2 00:22:45.305 nvme1n1 00:22:45.305 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:22:45.305 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:22:45.305 14:27:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid=406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:22:45.564 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:22:45.564 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:22:45.564 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:22:45.564 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:22:45.564 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 00:22:45.564 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:22:45.564 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:22:45.564 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:22:45.564 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:22:45.564 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:22:45.564 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # '[' 0 -lt 15 ']' 00:22:45.564 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # i=1 00:22:45.564 14:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # sleep 1 00:22:46.501 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:22:46.501 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:22:46.501 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:22:46.501 14:27:14 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:22:46.501 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:22:46.501 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid d47a4f9f-7236-4c57-b0bb-36d20ceec22a 00:22:46.501 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:46.501 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:22:46.501 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:22:46.501 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:22:46.501 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:46.760 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=d47a4f9f72364c57b0bb36d20ceec22a 00:22:46.760 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo D47A4F9F72364C57B0BB36D20CEEC22A 00:22:46.760 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ D47A4F9F72364C57B0BB36D20CEEC22A == \D\4\7\A\4\F\9\F\7\2\3\6\4\C\5\7\B\0\B\B\3\6\D\2\0\C\E\E\C\2\2\A ]] 00:22:46.760 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:22:46.760 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:22:46.760 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n2 00:22:46.760 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:22:46.760 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:22:46.760 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n2 00:22:46.760 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:22:46.760 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 378709b7-886d-49af-894d-b306624b9e00 00:22:46.760 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:46.760 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:22:46.760 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:22:46.760 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:22:46.760 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:46.760 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=378709b7886d49af894db306624b9e00 00:22:46.760 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 378709B7886D49AF894DB306624B9E00 00:22:46.760 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 378709B7886D49AF894DB306624B9E00 == \3\7\8\7\0\9\B\7\8\8\6\D\4\9\A\F\8\9\4\D\B\3\0\6\6\2\4\B\9\E\0\0 ]] 00:22:46.760 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:22:46.760 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:22:46.760 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:22:46.760 14:27:14 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n3 00:22:46.760 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:22:46.760 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n3 00:22:46.760 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:22:46.760 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid f53b72f9-da77-4fcb-adcf-64b3cd50d83d 00:22:46.760 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:46.760 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:22:46.760 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:22:46.760 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:22:46.760 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:46.760 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=f53b72f9da774fcbadcf64b3cd50d83d 00:22:46.760 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo F53B72F9DA774FCBADCF64B3CD50D83D 00:22:46.760 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ F53B72F9DA774FCBADCF64B3CD50D83D == \F\5\3\B\7\2\F\9\D\A\7\7\4\F\C\B\A\D\C\F\6\4\B\3\C\D\5\0\D\8\3\D ]] 00:22:46.760 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:22:47.019 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:22:47.019 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:22:47.019 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 79866 00:22:47.019 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 79866 ']' 00:22:47.019 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 79866 00:22:47.019 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:22:47.019 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:47.019 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79866 00:22:47.019 killing process with pid 79866 00:22:47.019 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:47.019 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:47.019 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79866' 00:22:47.019 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 79866 00:22:47.019 14:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 79866 00:22:49.555 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:22:49.555 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:49.555 14:27:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:22:49.555 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # 
'[' tcp == tcp ']' 00:22:49.555 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:22:49.555 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:49.555 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:49.555 rmmod nvme_tcp 00:22:49.555 rmmod nvme_fabrics 00:22:49.555 rmmod nvme_keyring 00:22:49.555 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:49.555 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:22:49.555 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:22:49.555 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 79834 ']' 00:22:49.555 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 79834 00:22:49.555 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 79834 ']' 00:22:49.555 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 79834 00:22:49.555 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:22:49.555 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:49.555 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79834 00:22:49.555 killing process with pid 79834 00:22:49.555 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:49.555 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:49.555 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79834' 00:22:49.555 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 79834 00:22:49.555 14:27:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 79834 00:22:50.965 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:50.965 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:50.965 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:50.965 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:22:50.965 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:22:50.965 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:50.965 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:22:50.965 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:50.965 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:50.965 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:50.965 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:50.965 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:50.965 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set 
nvmf_tgt_br2 nomaster 00:22:50.965 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:50.965 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:50.965 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:50.965 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:50.965 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:50.965 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:50.965 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:51.224 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:51.224 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:51.224 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:51.224 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.224 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:51.224 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.224 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:22:51.224 ************************************ 00:22:51.224 END TEST nvmf_nsid 00:22:51.224 ************************************ 00:22:51.224 00:22:51.224 real 0m9.398s 00:22:51.224 user 0m13.611s 00:22:51.224 sys 0m2.764s 00:22:51.224 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:51.224 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:51.224 14:27:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:22:51.224 ************************************ 00:22:51.224 END TEST nvmf_target_extra 00:22:51.224 ************************************ 00:22:51.224 00:22:51.224 real 7m32.628s 00:22:51.224 user 17m37.331s 00:22:51.224 sys 2m17.645s 00:22:51.224 14:27:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:51.224 14:27:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:51.224 14:27:18 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:51.224 14:27:18 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:51.224 14:27:18 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:51.224 14:27:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:51.224 ************************************ 00:22:51.224 START TEST nvmf_host 00:22:51.225 ************************************ 00:22:51.225 14:27:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:51.484 * Looking for test storage... 
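A standalone sketch of the NGUID check the nsid test performs above: an NGUID that was assigned from a UUID is simply the UUID with its dashes stripped, so verifying it only takes tr, jq and a case-insensitive comparison. The UUID and device node below are the ones visible in the log, used here purely as illustrative values.
uuid=d47a4f9f-7236-4c57-b0bb-36d20ceec22a                     # example UUID (illustrative)
expected=$(tr -d - <<< "$uuid")                               # strip dashes -> NGUID hex string
reported=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)    # NGUID as the kernel reports it
if [[ "${reported^^}" == "${expected^^}" ]]; then             # compare uppercased forms
    echo "NGUID matches the UUID-derived value"
else
    echo "NGUID mismatch: got $reported, expected $expected"
fi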
00:22:51.484 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:22:51.484 14:27:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:51.484 14:27:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:22:51.484 14:27:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:51.484 14:27:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:51.484 14:27:19 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:51.484 14:27:19 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:51.484 14:27:19 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:51.484 14:27:19 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:51.484 14:27:19 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:51.484 14:27:19 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:51.484 14:27:19 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:51.484 14:27:19 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:51.484 14:27:19 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:51.484 14:27:19 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:51.484 14:27:19 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:51.484 14:27:19 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:22:51.484 14:27:19 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:22:51.484 14:27:19 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:51.484 14:27:19 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:51.484 14:27:19 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:22:51.484 14:27:19 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:22:51.484 14:27:19 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:51.484 14:27:19 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:22:51.484 14:27:19 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:51.484 14:27:19 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:22:51.484 14:27:19 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:22:51.484 14:27:19 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:51.484 14:27:19 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:22:51.484 14:27:19 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:51.484 14:27:19 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:51.484 14:27:19 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:51.484 14:27:19 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:22:51.484 14:27:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:51.484 14:27:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:51.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.484 --rc genhtml_branch_coverage=1 00:22:51.484 --rc genhtml_function_coverage=1 00:22:51.484 --rc genhtml_legend=1 00:22:51.484 --rc geninfo_all_blocks=1 00:22:51.484 --rc geninfo_unexecuted_blocks=1 00:22:51.484 00:22:51.484 ' 00:22:51.484 14:27:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:51.484 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:22:51.484 --rc genhtml_branch_coverage=1 00:22:51.484 --rc genhtml_function_coverage=1 00:22:51.484 --rc genhtml_legend=1 00:22:51.484 --rc geninfo_all_blocks=1 00:22:51.484 --rc geninfo_unexecuted_blocks=1 00:22:51.484 00:22:51.484 ' 00:22:51.484 14:27:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:51.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.484 --rc genhtml_branch_coverage=1 00:22:51.484 --rc genhtml_function_coverage=1 00:22:51.484 --rc genhtml_legend=1 00:22:51.484 --rc geninfo_all_blocks=1 00:22:51.484 --rc geninfo_unexecuted_blocks=1 00:22:51.484 00:22:51.484 ' 00:22:51.485 14:27:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:51.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.485 --rc genhtml_branch_coverage=1 00:22:51.485 --rc genhtml_function_coverage=1 00:22:51.485 --rc genhtml_legend=1 00:22:51.485 --rc geninfo_all_blocks=1 00:22:51.485 --rc geninfo_unexecuted_blocks=1 00:22:51.485 00:22:51.485 ' 00:22:51.485 14:27:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:51.485 14:27:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:22:51.485 14:27:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:51.485 14:27:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:51.485 14:27:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:51.485 14:27:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:51.485 14:27:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:51.485 14:27:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:51.485 14:27:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:51.485 14:27:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:51.485 14:27:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:51.485 14:27:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:51.485 14:27:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:22:51.485 14:27:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:22:51.485 14:27:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:51.485 14:27:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:51.485 14:27:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:51.485 14:27:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:51.485 14:27:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:51.485 14:27:19 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:51.485 14:27:19 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:51.485 14:27:19 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:51.485 14:27:19 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:51.485 14:27:19 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.485 14:27:19 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.485 14:27:19 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.485 14:27:19 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:22:51.485 14:27:19 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.485 14:27:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:22:51.485 14:27:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:51.485 14:27:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:51.485 14:27:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:51.485 14:27:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:51.485 14:27:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:51.485 14:27:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:51.485 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:51.485 14:27:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:51.485 14:27:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:51.485 14:27:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:51.485 14:27:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:22:51.485 14:27:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:22:51.485 14:27:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:22:51.485 14:27:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:51.485 
14:27:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:51.485 14:27:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:51.485 14:27:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.485 ************************************ 00:22:51.485 START TEST nvmf_identify 00:22:51.485 ************************************ 00:22:51.485 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:51.744 * Looking for test storage... 00:22:51.744 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:51.744 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:51.744 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:22:51.744 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:51.744 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:51.744 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:51.744 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:51.744 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:51.744 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:22:51.744 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:22:51.744 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:22:51.744 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:22:51.744 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:22:51.744 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:22:51.744 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:22:51.744 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:51.744 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:22:51.744 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:22:51.744 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:51.744 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:51.744 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:22:51.744 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:22:51.744 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:51.744 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:22:51.744 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:22:51.744 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:22:51.745 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:22:51.745 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:51.745 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:22:51.745 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:22:51.745 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:51.745 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:51.745 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:22:51.745 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:51.745 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:51.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.745 --rc genhtml_branch_coverage=1 00:22:51.745 --rc genhtml_function_coverage=1 00:22:51.745 --rc genhtml_legend=1 00:22:51.745 --rc geninfo_all_blocks=1 00:22:51.745 --rc geninfo_unexecuted_blocks=1 00:22:51.745 00:22:51.745 ' 00:22:51.745 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:51.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.745 --rc genhtml_branch_coverage=1 00:22:51.745 --rc genhtml_function_coverage=1 00:22:51.745 --rc genhtml_legend=1 00:22:51.745 --rc geninfo_all_blocks=1 00:22:51.745 --rc geninfo_unexecuted_blocks=1 00:22:51.745 00:22:51.745 ' 00:22:51.745 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:51.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.745 --rc genhtml_branch_coverage=1 00:22:51.745 --rc genhtml_function_coverage=1 00:22:51.745 --rc genhtml_legend=1 00:22:51.745 --rc geninfo_all_blocks=1 00:22:51.745 --rc geninfo_unexecuted_blocks=1 00:22:51.745 00:22:51.745 ' 00:22:51.745 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:51.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.745 --rc genhtml_branch_coverage=1 00:22:51.745 --rc genhtml_function_coverage=1 00:22:51.745 --rc genhtml_legend=1 00:22:51.745 --rc geninfo_all_blocks=1 00:22:51.745 --rc geninfo_unexecuted_blocks=1 00:22:51.745 00:22:51.745 ' 00:22:51.745 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:51.745 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:51.745 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:51.745 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:22:51.745 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:51.745 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:51.745 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:51.745 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:51.745 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:51.745 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:51.745 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:51.745 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:51.745 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:22:51.745 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:22:51.745 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:51.745 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:51.745 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:51.745 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:51.745 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:51.745 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:22:51.745 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:51.745 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:51.745 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:51.745 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.745 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.745 
14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.745 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:22:51.745 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.745 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:22:51.745 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:51.745 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:51.745 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:51.745 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:51.745 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:51.745 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:51.745 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:51.745 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:51.745 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:51.745 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:52.006 14:27:19 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:52.006 Cannot find device "nvmf_init_br" 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:52.006 Cannot find device "nvmf_init_br2" 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:52.006 Cannot find device "nvmf_tgt_br" 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:22:52.006 Cannot find device "nvmf_tgt_br2" 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:52.006 Cannot find device "nvmf_init_br" 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:52.006 Cannot find device "nvmf_init_br2" 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:52.006 Cannot find device "nvmf_tgt_br" 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:52.006 Cannot find device "nvmf_tgt_br2" 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:52.006 Cannot find device "nvmf_br" 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:52.006 Cannot find device "nvmf_init_if" 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:52.006 Cannot find device "nvmf_init_if2" 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:52.006 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:52.006 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:52.006 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:52.266 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:52.266 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:52.266 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:52.266 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:52.266 
14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:52.266 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:52.266 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:52.266 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:52.266 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:52.266 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:52.266 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:52.266 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:52.266 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:52.266 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:52.266 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:52.266 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:52.266 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:52.266 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:52.266 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:52.266 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:52.266 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:52.266 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:52.525 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:52.525 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:52.525 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:52.525 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:52.525 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:52.525 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:52.525 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:52.525 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:22:52.525 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.110 ms 00:22:52.525 00:22:52.525 --- 10.0.0.3 ping statistics --- 00:22:52.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.525 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:22:52.525 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:52.525 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:52.525 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.087 ms 00:22:52.525 00:22:52.525 --- 10.0.0.4 ping statistics --- 00:22:52.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.525 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:22:52.525 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:52.525 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:52.525 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:22:52.525 00:22:52.525 --- 10.0.0.1 ping statistics --- 00:22:52.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.525 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:22:52.525 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:52.525 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:52.525 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.115 ms 00:22:52.525 00:22:52.525 --- 10.0.0.2 ping statistics --- 00:22:52.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.526 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:22:52.526 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:52.526 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:22:52.526 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:52.526 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:52.526 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:52.526 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:52.526 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:52.526 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:52.526 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:52.526 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:52.526 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:52.526 14:27:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:52.526 14:27:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=80274 00:22:52.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
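For readers following along, this is a condensed, hand-written sketch of the topology that nvmf_veth_init builds above: two veth pairs, a target network namespace, and a bridge tying the host-side ends together. Interface and namespace names follow the log; the second initiator/target pair and all error handling are omitted, and the commands need root.
ip netns add nvmf_tgt_ns_spdk                                # namespace the target will run in
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                              # bridge the host-side veth ends
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3                                           # initiator can now reach the target IP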
00:22:52.526 14:27:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:52.526 14:27:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:52.526 14:27:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 80274 00:22:52.526 14:27:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # '[' -z 80274 ']' 00:22:52.526 14:27:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:52.526 14:27:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:52.526 14:27:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:52.526 14:27:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:52.526 14:27:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:52.526 [2024-11-06 14:27:20.117620] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:22:52.526 [2024-11-06 14:27:20.118052] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:52.785 [2024-11-06 14:27:20.305829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:53.044 [2024-11-06 14:27:20.463919] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:53.044 [2024-11-06 14:27:20.464126] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:53.044 [2024-11-06 14:27:20.464575] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:53.045 [2024-11-06 14:27:20.464928] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:53.045 [2024-11-06 14:27:20.465164] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
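A minimal sketch of what the identify.sh startup above amounts to: launch nvmf_tgt inside the target namespace, record its pid, and poll the RPC socket until the application answers. Polling with rpc_get_methods is an assumption standing in for the test's waitforlisten helper; the binary and socket paths are the ones in the log.
SPDK=/home/vagrant/spdk_repo/spdk
ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Wait until the RPC server behind /var/tmp/spdk.sock responds (stand-in for waitforlisten).
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
echo "nvmf_tgt (pid $nvmfpid) is up and serving RPCs"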
00:22:53.045 [2024-11-06 14:27:20.468153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:53.045 [2024-11-06 14:27:20.468281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:53.045 [2024-11-06 14:27:20.468294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:53.045 [2024-11-06 14:27:20.468300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:53.304 [2024-11-06 14:27:20.729402] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:53.304 14:27:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:53.304 14:27:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@866 -- # return 0 00:22:53.304 14:27:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:53.304 14:27:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.304 14:27:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:53.304 [2024-11-06 14:27:20.931770] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:53.575 14:27:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.575 14:27:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:53.575 14:27:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:53.575 14:27:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:53.575 14:27:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:53.575 14:27:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.575 14:27:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:53.575 Malloc0 00:22:53.575 14:27:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.575 14:27:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:53.575 14:27:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.575 14:27:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:53.575 14:27:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.575 14:27:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:53.575 14:27:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.575 14:27:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:53.575 14:27:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.575 14:27:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:53.575 14:27:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.575 14:27:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:53.575 [2024-11-06 14:27:21.119630] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:53.575 14:27:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.575 14:27:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:22:53.575 14:27:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.575 14:27:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:53.575 14:27:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.575 14:27:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:53.575 14:27:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.575 14:27:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:53.575 [ 00:22:53.575 { 00:22:53.575 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:53.575 "subtype": "Discovery", 00:22:53.575 "listen_addresses": [ 00:22:53.575 { 00:22:53.575 "trtype": "TCP", 00:22:53.575 "adrfam": "IPv4", 00:22:53.575 "traddr": "10.0.0.3", 00:22:53.575 "trsvcid": "4420" 00:22:53.575 } 00:22:53.575 ], 00:22:53.575 "allow_any_host": true, 00:22:53.575 "hosts": [] 00:22:53.575 }, 00:22:53.575 { 00:22:53.575 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:53.575 "subtype": "NVMe", 00:22:53.575 "listen_addresses": [ 00:22:53.575 { 00:22:53.575 "trtype": "TCP", 00:22:53.575 "adrfam": "IPv4", 00:22:53.575 "traddr": "10.0.0.3", 00:22:53.575 "trsvcid": "4420" 00:22:53.575 } 00:22:53.575 ], 00:22:53.575 "allow_any_host": true, 00:22:53.575 "hosts": [], 00:22:53.575 "serial_number": "SPDK00000000000001", 00:22:53.575 "model_number": "SPDK bdev Controller", 00:22:53.575 "max_namespaces": 32, 00:22:53.575 "min_cntlid": 1, 00:22:53.575 "max_cntlid": 65519, 00:22:53.575 "namespaces": [ 00:22:53.575 { 00:22:53.575 "nsid": 1, 00:22:53.575 "bdev_name": "Malloc0", 00:22:53.575 "name": "Malloc0", 00:22:53.575 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:53.575 "eui64": "ABCDEF0123456789", 00:22:53.575 "uuid": "43dc943e-640d-434a-b7a5-73e1f05c7344" 00:22:53.575 } 00:22:53.575 ] 00:22:53.575 } 00:22:53.575 ] 00:22:53.575 14:27:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.575 14:27:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:53.855 [2024-11-06 14:27:21.227816] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
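The rpc_cmd calls above, written out as plain rpc.py invocations: a sketch of how the same Malloc0-backed subsystem could be built by hand against the default RPC socket. The flags mirror the log, and the trailing nvmf_get_subsystems should print JSON equivalent to the listing above.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
$rpc nvmf_create_transport -t tcp -o -u 8192                 # TCP transport, 8 KiB IO unit size
$rpc bdev_malloc_create 64 512 -b Malloc0                    # 64 MiB RAM-backed bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_get_subsystems                                     # prints the subsystem JSON shown above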
00:22:53.855 [2024-11-06 14:27:21.228076] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80309 ] 00:22:53.855 [2024-11-06 14:27:21.402457] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:22:53.855 [2024-11-06 14:27:21.402593] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:53.855 [2024-11-06 14:27:21.402603] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:53.855 [2024-11-06 14:27:21.402638] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:53.855 [2024-11-06 14:27:21.402656] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:53.856 [2024-11-06 14:27:21.403091] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:22:53.856 [2024-11-06 14:27:21.403162] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x61500000f080 0 00:22:53.856 [2024-11-06 14:27:21.407883] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:53.856 [2024-11-06 14:27:21.407922] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:53.856 [2024-11-06 14:27:21.407933] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:53.856 [2024-11-06 14:27:21.407943] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:53.856 [2024-11-06 14:27:21.408036] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.856 [2024-11-06 14:27:21.408051] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.856 [2024-11-06 14:27:21.408059] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:53.856 [2024-11-06 14:27:21.408090] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:53.856 [2024-11-06 14:27:21.408130] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:53.856 [2024-11-06 14:27:21.415877] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.856 [2024-11-06 14:27:21.415907] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.856 [2024-11-06 14:27:21.415915] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.856 [2024-11-06 14:27:21.415925] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:53.856 [2024-11-06 14:27:21.415945] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:53.856 [2024-11-06 14:27:21.415960] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:22:53.856 [2024-11-06 14:27:21.415976] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:22:53.856 [2024-11-06 14:27:21.416001] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.856 [2024-11-06 14:27:21.416010] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.856 [2024-11-06 14:27:21.416017] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:53.856 [2024-11-06 14:27:21.416033] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.856 [2024-11-06 14:27:21.416065] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:53.856 [2024-11-06 14:27:21.416159] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.856 [2024-11-06 14:27:21.416172] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.856 [2024-11-06 14:27:21.416178] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.856 [2024-11-06 14:27:21.416186] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:53.856 [2024-11-06 14:27:21.416196] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:22:53.856 [2024-11-06 14:27:21.416209] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:22:53.856 [2024-11-06 14:27:21.416219] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.856 [2024-11-06 14:27:21.416226] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.856 [2024-11-06 14:27:21.416236] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:53.856 [2024-11-06 14:27:21.416251] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.856 [2024-11-06 14:27:21.416275] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:53.856 [2024-11-06 14:27:21.416343] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.856 [2024-11-06 14:27:21.416351] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.856 [2024-11-06 14:27:21.416360] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.856 [2024-11-06 14:27:21.416366] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:53.856 [2024-11-06 14:27:21.416375] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:22:53.856 [2024-11-06 14:27:21.416387] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:53.856 [2024-11-06 14:27:21.416398] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.856 [2024-11-06 14:27:21.416405] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.856 [2024-11-06 14:27:21.416412] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:53.856 [2024-11-06 14:27:21.416423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.856 [2024-11-06 14:27:21.416443] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:53.856 [2024-11-06 14:27:21.416510] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.856 [2024-11-06 14:27:21.416518] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.856 [2024-11-06 14:27:21.416524] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.856 [2024-11-06 14:27:21.416530] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:53.856 [2024-11-06 14:27:21.416538] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:53.856 [2024-11-06 14:27:21.416551] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.856 [2024-11-06 14:27:21.416562] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.856 [2024-11-06 14:27:21.416569] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:53.856 [2024-11-06 14:27:21.416583] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.856 [2024-11-06 14:27:21.416604] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:53.856 [2024-11-06 14:27:21.416660] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.856 [2024-11-06 14:27:21.416669] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.856 [2024-11-06 14:27:21.416675] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.856 [2024-11-06 14:27:21.416681] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:53.856 [2024-11-06 14:27:21.416689] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:53.856 [2024-11-06 14:27:21.416701] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:53.856 [2024-11-06 14:27:21.416713] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:53.856 [2024-11-06 14:27:21.416823] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:22:53.856 [2024-11-06 14:27:21.416831] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:53.856 [2024-11-06 14:27:21.416860] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.856 [2024-11-06 14:27:21.416868] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.856 [2024-11-06 14:27:21.416878] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:53.856 [2024-11-06 14:27:21.416890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.856 [2024-11-06 14:27:21.416912] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:53.856 [2024-11-06 14:27:21.416982] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.856 [2024-11-06 14:27:21.416993] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.856 [2024-11-06 14:27:21.416998] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.856 [2024-11-06 14:27:21.417004] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:53.856 [2024-11-06 14:27:21.417013] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:53.856 [2024-11-06 14:27:21.417026] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.857 [2024-11-06 14:27:21.417033] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.857 [2024-11-06 14:27:21.417040] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:53.857 [2024-11-06 14:27:21.417051] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.857 [2024-11-06 14:27:21.417079] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:53.857 [2024-11-06 14:27:21.417127] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.857 [2024-11-06 14:27:21.417135] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.857 [2024-11-06 14:27:21.417141] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.857 [2024-11-06 14:27:21.417147] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:53.857 [2024-11-06 14:27:21.417155] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:53.857 [2024-11-06 14:27:21.417175] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:53.857 [2024-11-06 14:27:21.417194] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:22:53.857 [2024-11-06 14:27:21.417209] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:53.857 [2024-11-06 14:27:21.417227] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.857 [2024-11-06 14:27:21.417234] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:53.857 [2024-11-06 14:27:21.417246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.857 [2024-11-06 14:27:21.417268] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:53.857 [2024-11-06 14:27:21.417395] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:53.857 [2024-11-06 14:27:21.417407] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:53.857 [2024-11-06 14:27:21.417413] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:53.857 [2024-11-06 14:27:21.417420] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=0 00:22:53.857 [2024-11-06 14:27:21.417429] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:22:53.857 [2024-11-06 14:27:21.417437] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.857 [2024-11-06 14:27:21.417455] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:53.857 [2024-11-06 14:27:21.417462] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:53.857 [2024-11-06 14:27:21.417475] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.857 [2024-11-06 14:27:21.417483] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.857 [2024-11-06 14:27:21.417488] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.857 [2024-11-06 14:27:21.417494] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:53.857 [2024-11-06 14:27:21.417509] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:22:53.857 [2024-11-06 14:27:21.417518] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:22:53.857 [2024-11-06 14:27:21.417527] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:22:53.857 [2024-11-06 14:27:21.417536] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:22:53.857 [2024-11-06 14:27:21.417544] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:22:53.857 [2024-11-06 14:27:21.417552] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:22:53.857 [2024-11-06 14:27:21.417568] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:53.857 [2024-11-06 14:27:21.417586] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.857 [2024-11-06 14:27:21.417594] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.857 [2024-11-06 14:27:21.417601] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:53.857 [2024-11-06 14:27:21.417613] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:53.857 [2024-11-06 14:27:21.417637] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:53.857 [2024-11-06 14:27:21.417712] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.857 [2024-11-06 14:27:21.417720] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.857 [2024-11-06 14:27:21.417726] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.857 [2024-11-06 14:27:21.417732] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:53.857 [2024-11-06 14:27:21.417748] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.857 [2024-11-06 14:27:21.417755] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.857 [2024-11-06 14:27:21.417762] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:53.857 [2024-11-06 14:27:21.417781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.857 [2024-11-06 14:27:21.417790] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.857 [2024-11-06 14:27:21.417796] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.857 [2024-11-06 14:27:21.417802] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x61500000f080) 00:22:53.857 [2024-11-06 14:27:21.417814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.857 [2024-11-06 14:27:21.417823] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.857 [2024-11-06 14:27:21.417828] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.857 [2024-11-06 14:27:21.417834] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x61500000f080) 00:22:53.857 [2024-11-06 14:27:21.417856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.857 [2024-11-06 14:27:21.417865] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.857 [2024-11-06 14:27:21.417871] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.857 [2024-11-06 14:27:21.417877] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:53.857 [2024-11-06 14:27:21.417886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.857 [2024-11-06 14:27:21.417893] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:53.857 [2024-11-06 14:27:21.417914] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:53.857 [2024-11-06 14:27:21.417927] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.857 [2024-11-06 14:27:21.417933] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:22:53.857 [2024-11-06 14:27:21.417944] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.857 [2024-11-06 14:27:21.417970] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:53.857 [2024-11-06 14:27:21.417979] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:22:53.857 [2024-11-06 14:27:21.417985] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:22:53.857 [2024-11-06 14:27:21.417992] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:53.857 [2024-11-06 14:27:21.417999] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:22:53.857 [2024-11-06 14:27:21.418117] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.857 [2024-11-06 14:27:21.418126] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.857 [2024-11-06 14:27:21.418132] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.857 [2024-11-06 14:27:21.418138] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:22:53.857 [2024-11-06 14:27:21.418146] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:22:53.857 [2024-11-06 14:27:21.418156] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:22:53.857 [2024-11-06 14:27:21.418178] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.857 [2024-11-06 14:27:21.418185] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:22:53.857 [2024-11-06 14:27:21.418196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.857 [2024-11-06 14:27:21.418220] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:22:53.857 [2024-11-06 14:27:21.418308] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:53.857 [2024-11-06 14:27:21.418317] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:53.857 [2024-11-06 14:27:21.418323] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:53.857 [2024-11-06 14:27:21.418330] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:22:53.857 [2024-11-06 14:27:21.418338] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:22:53.857 [2024-11-06 14:27:21.418345] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.857 [2024-11-06 14:27:21.418355] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:53.857 [2024-11-06 14:27:21.418362] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:53.857 [2024-11-06 14:27:21.418374] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.857 [2024-11-06 14:27:21.418390] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.857 [2024-11-06 14:27:21.418395] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.857 [2024-11-06 14:27:21.418405] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:22:53.857 [2024-11-06 14:27:21.418431] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:22:53.858 [2024-11-06 14:27:21.418502] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.858 [2024-11-06 14:27:21.418512] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:22:53.858 [2024-11-06 14:27:21.418529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.858 [2024-11-06 14:27:21.418539] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.858 [2024-11-06 14:27:21.418546] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:22:53.858 [2024-11-06 14:27:21.418553] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:22:53.858 [2024-11-06 14:27:21.418566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.858 [2024-11-06 14:27:21.418595] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:22:53.858 [2024-11-06 14:27:21.418607] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:22:53.858 [2024-11-06 14:27:21.418826] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:53.858 [2024-11-06 14:27:21.418856] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:53.858 [2024-11-06 14:27:21.418863] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:53.858 [2024-11-06 14:27:21.418870] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=1024, cccid=4 00:22:53.858 [2024-11-06 14:27:21.418882] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=1024 00:22:53.858 [2024-11-06 14:27:21.418890] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.858 [2024-11-06 14:27:21.418899] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:53.858 [2024-11-06 14:27:21.418906] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:53.858 [2024-11-06 14:27:21.418915] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.858 [2024-11-06 14:27:21.418922] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.858 [2024-11-06 14:27:21.418928] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.858 [2024-11-06 14:27:21.418935] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:22:53.858 [2024-11-06 14:27:21.418957] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.858 [2024-11-06 14:27:21.418966] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.858 [2024-11-06 14:27:21.418978] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.858 [2024-11-06 14:27:21.418984] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:22:53.858 [2024-11-06 14:27:21.419012] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.858 [2024-11-06 14:27:21.419024] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:22:53.858 [2024-11-06 14:27:21.419036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.858 [2024-11-06 14:27:21.419064] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:22:53.858 [2024-11-06 14:27:21.419164] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:53.858 [2024-11-06 14:27:21.419172] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:53.858 [2024-11-06 14:27:21.419177] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:53.858 [2024-11-06 14:27:21.419183] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x61500000f080): datao=0, datal=3072, cccid=4 00:22:53.858 [2024-11-06 14:27:21.419190] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=3072 00:22:53.858 [2024-11-06 14:27:21.419200] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.858 [2024-11-06 14:27:21.419209] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:53.858 [2024-11-06 14:27:21.419215] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:53.858 [2024-11-06 14:27:21.419227] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.858 [2024-11-06 14:27:21.419234] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.858 [2024-11-06 14:27:21.419240] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.858 [2024-11-06 14:27:21.419246] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:22:53.858 [2024-11-06 14:27:21.419263] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.858 [2024-11-06 14:27:21.419271] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:22:53.858 [2024-11-06 14:27:21.419282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.858 [2024-11-06 14:27:21.419313] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:22:53.858 [2024-11-06 14:27:21.419415] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:53.858 [2024-11-06 14:27:21.419423] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:53.858 [2024-11-06 14:27:21.419429] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:53.858 [2024-11-06 14:27:21.419435] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=8, cccid=4 00:22:53.858 [2024-11-06 14:27:21.419442] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=8 00:22:53.858 [2024-11-06 14:27:21.419448] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.858 [2024-11-06 14:27:21.419457] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:53.858 [2024-11-06 14:27:21.419463] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:53.858 [2024-11-06 14:27:21.419483] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.858 [2024-11-06 14:27:21.419494] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.858 [2024-11-06 14:27:21.419500] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.858 ===================================================== 00:22:53.858 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:53.858 ===================================================== 00:22:53.858 Controller Capabilities/Features 00:22:53.858 ================================ 00:22:53.858 Vendor ID: 0000 00:22:53.858 Subsystem Vendor ID: 0000 00:22:53.858 Serial Number: .................... 00:22:53.858 Model Number: ........................................ 
00:22:53.858 Firmware Version: 25.01 00:22:53.858 Recommended Arb Burst: 0 00:22:53.858 IEEE OUI Identifier: 00 00 00 00:22:53.858 Multi-path I/O 00:22:53.858 May have multiple subsystem ports: No 00:22:53.858 May have multiple controllers: No 00:22:53.858 Associated with SR-IOV VF: No 00:22:53.858 Max Data Transfer Size: 131072 00:22:53.858 Max Number of Namespaces: 0 00:22:53.858 Max Number of I/O Queues: 1024 00:22:53.858 NVMe Specification Version (VS): 1.3 00:22:53.858 NVMe Specification Version (Identify): 1.3 00:22:53.858 Maximum Queue Entries: 128 00:22:53.858 Contiguous Queues Required: Yes 00:22:53.858 Arbitration Mechanisms Supported 00:22:53.858 Weighted Round Robin: Not Supported 00:22:53.858 Vendor Specific: Not Supported 00:22:53.858 Reset Timeout: 15000 ms 00:22:53.858 Doorbell Stride: 4 bytes 00:22:53.858 NVM Subsystem Reset: Not Supported 00:22:53.858 Command Sets Supported 00:22:53.858 NVM Command Set: Supported 00:22:53.858 Boot Partition: Not Supported 00:22:53.858 Memory Page Size Minimum: 4096 bytes 00:22:53.858 Memory Page Size Maximum: 4096 bytes 00:22:53.858 Persistent Memory Region: Not Supported 00:22:53.858 Optional Asynchronous Events Supported 00:22:53.858 Namespace Attribute Notices: Not Supported 00:22:53.858 Firmware Activation Notices: Not Supported 00:22:53.858 ANA Change Notices: Not Supported 00:22:53.858 PLE Aggregate Log Change Notices: Not Supported 00:22:53.858 LBA Status Info Alert Notices: Not Supported 00:22:53.858 EGE Aggregate Log Change Notices: Not Supported 00:22:53.858 Normal NVM Subsystem Shutdown event: Not Supported 00:22:53.858 Zone Descriptor Change Notices: Not Supported 00:22:53.858 Discovery Log Change Notices: Supported 00:22:53.858 Controller Attributes 00:22:53.858 128-bit Host Identifier: Not Supported 00:22:53.858 Non-Operational Permissive Mode: Not Supported 00:22:53.858 NVM Sets: Not Supported 00:22:53.859 Read Recovery Levels: Not Supported 00:22:53.859 Endurance Groups: Not Supported 00:22:53.859 Predictable Latency Mode: Not Supported 00:22:53.859 Traffic Based Keep ALive: Not Supported 00:22:53.859 Namespace Granularity: Not Supported 00:22:53.859 SQ Associations: Not Supported 00:22:53.859 UUID List: Not Supported 00:22:53.859 Multi-Domain Subsystem: Not Supported 00:22:53.859 Fixed Capacity Management: Not Supported 00:22:53.859 Variable Capacity Management: Not Supported 00:22:53.859 Delete Endurance Group: Not Supported 00:22:53.859 Delete NVM Set: Not Supported 00:22:53.859 Extended LBA Formats Supported: Not Supported 00:22:53.859 Flexible Data Placement Supported: Not Supported 00:22:53.859 00:22:53.859 Controller Memory Buffer Support 00:22:53.859 ================================ 00:22:53.859 Supported: No 00:22:53.859 00:22:53.859 Persistent Memory Region Support 00:22:53.859 ================================ 00:22:53.859 Supported: No 00:22:53.859 00:22:53.859 Admin Command Set Attributes 00:22:53.859 ============================ 00:22:53.859 Security Send/Receive: Not Supported 00:22:53.859 Format NVM: Not Supported 00:22:53.859 Firmware Activate/Download: Not Supported 00:22:53.859 Namespace Management: Not Supported 00:22:53.859 Device Self-Test: Not Supported 00:22:53.859 Directives: Not Supported 00:22:53.859 NVMe-MI: Not Supported 00:22:53.859 Virtualization Management: Not Supported 00:22:53.859 Doorbell Buffer Config: Not Supported 00:22:53.859 Get LBA Status Capability: Not Supported 00:22:53.859 Command & Feature Lockdown Capability: Not Supported 00:22:53.859 Abort Command Limit: 1 00:22:53.859 Async 
Event Request Limit: 4 00:22:53.859 Number of Firmware Slots: N/A 00:22:53.859 Firmware Slot 1 Read-Only: N/A 00:22:53.859 [2024-11-06 14:27:21.419506] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:22:53.859 Firmware Activation Without Reset: N/A 00:22:53.859 Multiple Update Detection Support: N/A 00:22:53.859 Firmware Update Granularity: No Information Provided 00:22:53.859 Per-Namespace SMART Log: No 00:22:53.859 Asymmetric Namespace Access Log Page: Not Supported 00:22:53.859 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:53.859 Command Effects Log Page: Not Supported 00:22:53.859 Get Log Page Extended Data: Supported 00:22:53.859 Telemetry Log Pages: Not Supported 00:22:53.859 Persistent Event Log Pages: Not Supported 00:22:53.859 Supported Log Pages Log Page: May Support 00:22:53.859 Commands Supported & Effects Log Page: Not Supported 00:22:53.859 Feature Identifiers & Effects Log Page:May Support 00:22:53.859 NVMe-MI Commands & Effects Log Page: May Support 00:22:53.859 Data Area 4 for Telemetry Log: Not Supported 00:22:53.859 Error Log Page Entries Supported: 128 00:22:53.859 Keep Alive: Not Supported 00:22:53.859 00:22:53.859 NVM Command Set Attributes 00:22:53.859 ========================== 00:22:53.859 Submission Queue Entry Size 00:22:53.859 Max: 1 00:22:53.859 Min: 1 00:22:53.859 Completion Queue Entry Size 00:22:53.859 Max: 1 00:22:53.859 Min: 1 00:22:53.859 Number of Namespaces: 0 00:22:53.859 Compare Command: Not Supported 00:22:53.859 Write Uncorrectable Command: Not Supported 00:22:53.859 Dataset Management Command: Not Supported 00:22:53.859 Write Zeroes Command: Not Supported 00:22:53.859 Set Features Save Field: Not Supported 00:22:53.859 Reservations: Not Supported 00:22:53.859 Timestamp: Not Supported 00:22:53.859 Copy: Not Supported 00:22:53.859 Volatile Write Cache: Not Present 00:22:53.859 Atomic Write Unit (Normal): 1 00:22:53.859 Atomic Write Unit (PFail): 1 00:22:53.859 Atomic Compare & Write Unit: 1 00:22:53.859 Fused Compare & Write: Supported 00:22:53.859 Scatter-Gather List 00:22:53.859 SGL Command Set: Supported 00:22:53.859 SGL Keyed: Supported 00:22:53.859 SGL Bit Bucket Descriptor: Not Supported 00:22:53.859 SGL Metadata Pointer: Not Supported 00:22:53.859 Oversized SGL: Not Supported 00:22:53.859 SGL Metadata Address: Not Supported 00:22:53.859 SGL Offset: Supported 00:22:53.859 Transport SGL Data Block: Not Supported 00:22:53.859 Replay Protected Memory Block: Not Supported 00:22:53.859 00:22:53.859 Firmware Slot Information 00:22:53.859 ========================= 00:22:53.859 Active slot: 0 00:22:53.859 00:22:53.859 00:22:53.859 Error Log 00:22:53.859 ========= 00:22:53.859 00:22:53.859 Active Namespaces 00:22:53.859 ================= 00:22:53.859 Discovery Log Page 00:22:53.859 ================== 00:22:53.859 Generation Counter: 2 00:22:53.859 Number of Records: 2 00:22:53.859 Record Format: 0 00:22:53.859 00:22:53.859 Discovery Log Entry 0 00:22:53.859 ---------------------- 00:22:53.859 Transport Type: 3 (TCP) 00:22:53.859 Address Family: 1 (IPv4) 00:22:53.859 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:53.859 Entry Flags: 00:22:53.859 Duplicate Returned Information: 1 00:22:53.859 Explicit Persistent Connection Support for Discovery: 1 00:22:53.859 Transport Requirements: 00:22:53.859 Secure Channel: Not Required 00:22:53.859 Port ID: 0 (0x0000) 00:22:53.859 Controller ID: 65535 (0xffff) 00:22:53.859 Admin Max SQ Size: 128 00:22:53.859 Transport Service Identifier: 
4420 00:22:53.859 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:53.859 Transport Address: 10.0.0.3 00:22:53.859 Discovery Log Entry 1 00:22:53.859 ---------------------- 00:22:53.859 Transport Type: 3 (TCP) 00:22:53.859 Address Family: 1 (IPv4) 00:22:53.859 Subsystem Type: 2 (NVM Subsystem) 00:22:53.859 Entry Flags: 00:22:53.859 Duplicate Returned Information: 0 00:22:53.859 Explicit Persistent Connection Support for Discovery: 0 00:22:53.859 Transport Requirements: 00:22:53.859 Secure Channel: Not Required 00:22:53.859 Port ID: 0 (0x0000) 00:22:53.859 Controller ID: 65535 (0xffff) 00:22:53.859 Admin Max SQ Size: 128 00:22:53.859 Transport Service Identifier: 4420 00:22:53.859 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:22:53.859 Transport Address: 10.0.0.3 [2024-11-06 14:27:21.419660] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:22:53.859 [2024-11-06 14:27:21.419679] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:53.859 [2024-11-06 14:27:21.419690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.859 [2024-11-06 14:27:21.419699] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x61500000f080 00:22:53.859 [2024-11-06 14:27:21.419708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.859 [2024-11-06 14:27:21.419715] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x61500000f080 00:22:53.859 [2024-11-06 14:27:21.419723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.859 [2024-11-06 14:27:21.419730] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:53.859 [2024-11-06 14:27:21.419738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.859 [2024-11-06 14:27:21.419759] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.859 [2024-11-06 14:27:21.419770] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.859 [2024-11-06 14:27:21.419777] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:53.859 [2024-11-06 14:27:21.419789] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.859 [2024-11-06 14:27:21.419814] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:53.860 [2024-11-06 14:27:21.423864] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.860 [2024-11-06 14:27:21.423889] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.860 [2024-11-06 14:27:21.423896] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.860 [2024-11-06 14:27:21.423905] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:53.860 [2024-11-06 14:27:21.423920] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.860 [2024-11-06 14:27:21.423936] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.860 [2024-11-06 14:27:21.423943] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:53.860 [2024-11-06 14:27:21.423957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.860 [2024-11-06 14:27:21.423988] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:53.860 [2024-11-06 14:27:21.424100] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.860 [2024-11-06 14:27:21.424109] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.860 [2024-11-06 14:27:21.424115] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.860 [2024-11-06 14:27:21.424120] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:53.860 [2024-11-06 14:27:21.424129] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:22:53.860 [2024-11-06 14:27:21.424137] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:22:53.860 [2024-11-06 14:27:21.424154] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.860 [2024-11-06 14:27:21.424161] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.860 [2024-11-06 14:27:21.424171] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:53.860 [2024-11-06 14:27:21.424194] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.860 [2024-11-06 14:27:21.424218] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:53.860 [2024-11-06 14:27:21.424285] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.860 [2024-11-06 14:27:21.424293] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.860 [2024-11-06 14:27:21.424299] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.860 [2024-11-06 14:27:21.424305] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:53.860 [2024-11-06 14:27:21.424322] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.860 [2024-11-06 14:27:21.424329] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.860 [2024-11-06 14:27:21.424334] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:53.860 [2024-11-06 14:27:21.424344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.860 [2024-11-06 14:27:21.424363] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:53.860 [2024-11-06 14:27:21.424433] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.860 [2024-11-06 14:27:21.424441] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.860 [2024-11-06 14:27:21.424447] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.860 [2024-11-06 14:27:21.424453] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on 
tqpair=0x61500000f080 00:22:53.860 [2024-11-06 14:27:21.424466] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.860 [2024-11-06 14:27:21.424472] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.860 [2024-11-06 14:27:21.424477] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:53.860 [2024-11-06 14:27:21.424487] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.860 [2024-11-06 14:27:21.424506] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:53.860 [2024-11-06 14:27:21.424571] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.860 [2024-11-06 14:27:21.424580] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.860 [2024-11-06 14:27:21.424585] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.860 [2024-11-06 14:27:21.424591] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:53.860 [2024-11-06 14:27:21.424604] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.860 [2024-11-06 14:27:21.424610] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.860 [2024-11-06 14:27:21.424616] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:53.860 [2024-11-06 14:27:21.424625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.860 [2024-11-06 14:27:21.424647] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:53.860 [2024-11-06 14:27:21.424703] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.860 [2024-11-06 14:27:21.424711] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.860 [2024-11-06 14:27:21.424717] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.860 [2024-11-06 14:27:21.424722] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:53.860 [2024-11-06 14:27:21.424735] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.860 [2024-11-06 14:27:21.424744] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.860 [2024-11-06 14:27:21.424750] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:53.860 [2024-11-06 14:27:21.424759] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.860 [2024-11-06 14:27:21.424778] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:53.860 [2024-11-06 14:27:21.424832] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.860 [2024-11-06 14:27:21.424853] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.860 [2024-11-06 14:27:21.424859] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.860 [2024-11-06 14:27:21.424868] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:53.860 [2024-11-06 14:27:21.424881] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.860 
[2024-11-06 14:27:21.424887] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.860 [2024-11-06 14:27:21.424892] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:53.860 [2024-11-06 14:27:21.424902] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.860 [2024-11-06 14:27:21.424922] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:53.860 [2024-11-06 14:27:21.424990] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.860 [2024-11-06 14:27:21.424999] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.860 [2024-11-06 14:27:21.425004] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.860 [2024-11-06 14:27:21.425010] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:53.860 [2024-11-06 14:27:21.425022] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.860 [2024-11-06 14:27:21.425028] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.860 [2024-11-06 14:27:21.425034] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:53.860 [2024-11-06 14:27:21.425043] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.860 [2024-11-06 14:27:21.425062] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:53.860 [2024-11-06 14:27:21.425112] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.860 [2024-11-06 14:27:21.425120] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.860 [2024-11-06 14:27:21.425126] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.860 [2024-11-06 14:27:21.425132] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:53.860 [2024-11-06 14:27:21.425144] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.860 [2024-11-06 14:27:21.425150] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.860 [2024-11-06 14:27:21.425155] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:53.860 [2024-11-06 14:27:21.425165] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.860 [2024-11-06 14:27:21.425183] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:53.860 [2024-11-06 14:27:21.425238] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.860 [2024-11-06 14:27:21.425246] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.860 [2024-11-06 14:27:21.425251] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.860 [2024-11-06 14:27:21.425257] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:53.860 [2024-11-06 14:27:21.425270] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.860 [2024-11-06 14:27:21.425276] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.860 [2024-11-06 14:27:21.425284] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:53.860 [2024-11-06 14:27:21.425297] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.860 [2024-11-06 14:27:21.425316] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:53.860 [2024-11-06 14:27:21.425365] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.860 [2024-11-06 14:27:21.425373] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.860 [2024-11-06 14:27:21.425378] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.860 [2024-11-06 14:27:21.425384] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:53.861 [2024-11-06 14:27:21.425400] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.861 [2024-11-06 14:27:21.425406] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.861 [2024-11-06 14:27:21.425411] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:53.861 [2024-11-06 14:27:21.425421] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.861 [2024-11-06 14:27:21.425439] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:53.861 [2024-11-06 14:27:21.425503] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.861 [2024-11-06 14:27:21.425515] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.861 [2024-11-06 14:27:21.425520] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.861 [2024-11-06 14:27:21.425526] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:53.861 [2024-11-06 14:27:21.425539] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.861 [2024-11-06 14:27:21.425545] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.861 [2024-11-06 14:27:21.425551] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:53.861 [2024-11-06 14:27:21.425561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.861 [2024-11-06 14:27:21.425581] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:53.861 [2024-11-06 14:27:21.425639] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.861 [2024-11-06 14:27:21.425648] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.861 [2024-11-06 14:27:21.425653] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.861 [2024-11-06 14:27:21.425659] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:53.861 [2024-11-06 14:27:21.425671] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.861 [2024-11-06 14:27:21.425677] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.861 [2024-11-06 14:27:21.425682] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:53.861 [2024-11-06 14:27:21.425692] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.861 [2024-11-06 14:27:21.425711] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:53.861 [2024-11-06 14:27:21.425778] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.861 [2024-11-06 14:27:21.425786] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.861 [2024-11-06 14:27:21.425792] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.861 [2024-11-06 14:27:21.425798] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:53.861 [2024-11-06 14:27:21.425810] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.861 [2024-11-06 14:27:21.425816] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.861 [2024-11-06 14:27:21.425827] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:53.861 [2024-11-06 14:27:21.425850] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.861 [2024-11-06 14:27:21.425870] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:53.861 [2024-11-06 14:27:21.425950] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.861 [2024-11-06 14:27:21.425962] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.861 [2024-11-06 14:27:21.425968] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.861 [2024-11-06 14:27:21.425974] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:53.861 [2024-11-06 14:27:21.425987] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.861 [2024-11-06 14:27:21.425993] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.861 [2024-11-06 14:27:21.425998] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:53.861 [2024-11-06 14:27:21.426008] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.861 [2024-11-06 14:27:21.426026] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:53.861 [2024-11-06 14:27:21.426090] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.861 [2024-11-06 14:27:21.426099] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.861 [2024-11-06 14:27:21.426104] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.861 [2024-11-06 14:27:21.426110] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:53.861 [2024-11-06 14:27:21.426123] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.861 [2024-11-06 14:27:21.426129] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.861 [2024-11-06 14:27:21.426134] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:53.861 [2024-11-06 14:27:21.426144] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.861 [2024-11-06 
14:27:21.426162] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:53.861 [2024-11-06 14:27:21.426229] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.861 [2024-11-06 14:27:21.426238] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.861 [2024-11-06 14:27:21.426243] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.861 [2024-11-06 14:27:21.426249] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:53.861 [2024-11-06 14:27:21.426262] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.861 [2024-11-06 14:27:21.426271] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.861 [2024-11-06 14:27:21.426277] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:53.861 [2024-11-06 14:27:21.426286] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.861 [2024-11-06 14:27:21.426304] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:53.861 [2024-11-06 14:27:21.426360] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.861 [2024-11-06 14:27:21.426368] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.861 [2024-11-06 14:27:21.426377] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.861 [2024-11-06 14:27:21.426383] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:53.861 [2024-11-06 14:27:21.426395] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.861 [2024-11-06 14:27:21.426401] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.861 [2024-11-06 14:27:21.426407] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:53.861 [2024-11-06 14:27:21.426421] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.861 [2024-11-06 14:27:21.426443] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:53.861 [2024-11-06 14:27:21.426567] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.861 [2024-11-06 14:27:21.426576] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.862 [2024-11-06 14:27:21.426582] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.862 [2024-11-06 14:27:21.426588] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:53.862 [2024-11-06 14:27:21.426603] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.862 [2024-11-06 14:27:21.426609] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.862 [2024-11-06 14:27:21.426615] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:53.862 [2024-11-06 14:27:21.426625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.862 [2024-11-06 14:27:21.426647] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:53.862 [2024-11-06 
14:27:21.426711] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.862 [2024-11-06 14:27:21.426724] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.862 [2024-11-06 14:27:21.426729] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.862 [2024-11-06 14:27:21.426735] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:53.862 [2024-11-06 14:27:21.426747] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.862 [2024-11-06 14:27:21.426754] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.862 [2024-11-06 14:27:21.426759] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:53.862 [2024-11-06 14:27:21.426769] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.862 [2024-11-06 14:27:21.426787] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:53.862 [2024-11-06 14:27:21.426862] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.862 [2024-11-06 14:27:21.426871] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.862 [2024-11-06 14:27:21.426877] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.862 [2024-11-06 14:27:21.426883] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:53.862 [2024-11-06 14:27:21.426899] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.862 [2024-11-06 14:27:21.426905] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.862 [2024-11-06 14:27:21.426911] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:53.862 [2024-11-06 14:27:21.426921] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.862 [2024-11-06 14:27:21.426977] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:53.862 [2024-11-06 14:27:21.427043] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.862 [2024-11-06 14:27:21.427052] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.862 [2024-11-06 14:27:21.427057] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.862 [2024-11-06 14:27:21.427063] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:53.862 [2024-11-06 14:27:21.427075] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.862 [2024-11-06 14:27:21.427084] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.862 [2024-11-06 14:27:21.427090] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:53.862 [2024-11-06 14:27:21.427100] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.862 [2024-11-06 14:27:21.427118] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:53.862 [2024-11-06 14:27:21.427182] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.862 [2024-11-06 14:27:21.427191] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.862 [2024-11-06 14:27:21.427199] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.862 [2024-11-06 14:27:21.427205] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:53.862 [2024-11-06 14:27:21.427218] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.862 [2024-11-06 14:27:21.427224] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.862 [2024-11-06 14:27:21.427230] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:53.862 [2024-11-06 14:27:21.427239] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.862 [2024-11-06 14:27:21.427257] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:53.862 [2024-11-06 14:27:21.427319] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.862 [2024-11-06 14:27:21.427328] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.862 [2024-11-06 14:27:21.427333] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.862 [2024-11-06 14:27:21.427339] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:53.862 [2024-11-06 14:27:21.427351] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.862 [2024-11-06 14:27:21.427357] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.862 [2024-11-06 14:27:21.427363] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:53.862 [2024-11-06 14:27:21.427372] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.862 [2024-11-06 14:27:21.427391] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:53.862 [2024-11-06 14:27:21.427442] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.862 [2024-11-06 14:27:21.427450] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.862 [2024-11-06 14:27:21.427455] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.862 [2024-11-06 14:27:21.427461] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:53.862 [2024-11-06 14:27:21.427477] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.862 [2024-11-06 14:27:21.427483] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.862 [2024-11-06 14:27:21.427488] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:53.862 [2024-11-06 14:27:21.427498] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.862 [2024-11-06 14:27:21.427521] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:53.862 [2024-11-06 14:27:21.427572] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.862 [2024-11-06 14:27:21.427580] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.862 [2024-11-06 14:27:21.427597] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.862 [2024-11-06 14:27:21.427603] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:53.862 [2024-11-06 14:27:21.427616] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.862 [2024-11-06 14:27:21.427628] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.862 [2024-11-06 14:27:21.427634] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:53.862 [2024-11-06 14:27:21.427649] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.862 [2024-11-06 14:27:21.427670] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:53.862 [2024-11-06 14:27:21.427729] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.862 [2024-11-06 14:27:21.427737] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.862 [2024-11-06 14:27:21.427745] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.862 [2024-11-06 14:27:21.427751] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:53.862 [2024-11-06 14:27:21.427774] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.862 [2024-11-06 14:27:21.427782] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.862 [2024-11-06 14:27:21.427787] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:53.862 [2024-11-06 14:27:21.427797] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.862 [2024-11-06 14:27:21.427817] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:53.862 [2024-11-06 14:27:21.431866] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.862 [2024-11-06 14:27:21.431890] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.862 [2024-11-06 14:27:21.431897] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.862 [2024-11-06 14:27:21.431909] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:53.862 [2024-11-06 14:27:21.431927] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:53.862 [2024-11-06 14:27:21.431933] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:53.862 [2024-11-06 14:27:21.431939] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:53.862 [2024-11-06 14:27:21.431951] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.862 [2024-11-06 14:27:21.431978] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:53.862 [2024-11-06 14:27:21.432053] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:53.862 [2024-11-06 14:27:21.432062] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:53.862 [2024-11-06 14:27:21.432067] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:53.862 [2024-11-06 14:27:21.432073] nvme_tcp.c:1011:nvme_tcp_req_complete: 
*DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:53.862 [2024-11-06 14:27:21.432084] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:22:53.862 00:22:54.122 14:27:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:54.122 [2024-11-06 14:27:21.559862] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:22:54.122 [2024-11-06 14:27:21.559945] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80312 ] 00:22:54.122 [2024-11-06 14:27:21.738251] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:22:54.122 [2024-11-06 14:27:21.738388] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:54.122 [2024-11-06 14:27:21.738399] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:54.122 [2024-11-06 14:27:21.738427] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:54.122 [2024-11-06 14:27:21.738442] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:54.122 [2024-11-06 14:27:21.738828] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:22:54.122 [2024-11-06 14:27:21.738910] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x61500000f080 0 00:22:54.122 [2024-11-06 14:27:21.747868] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:54.122 [2024-11-06 14:27:21.747898] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:54.122 [2024-11-06 14:27:21.747907] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:54.122 [2024-11-06 14:27:21.747913] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:54.122 [2024-11-06 14:27:21.747998] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.122 [2024-11-06 14:27:21.748009] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.122 [2024-11-06 14:27:21.748017] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:54.122 [2024-11-06 14:27:21.748039] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:54.122 [2024-11-06 14:27:21.748072] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:54.122 [2024-11-06 14:27:21.755865] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.122 [2024-11-06 14:27:21.755894] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.122 [2024-11-06 14:27:21.755901] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.122 [2024-11-06 14:27:21.755910] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:54.122 [2024-11-06 14:27:21.755934] nvme_fabric.c: 
621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:54.122 [2024-11-06 14:27:21.755955] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:22:54.122 [2024-11-06 14:27:21.755965] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:22:54.122 [2024-11-06 14:27:21.755986] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.122 [2024-11-06 14:27:21.755994] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.122 [2024-11-06 14:27:21.756001] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:54.122 [2024-11-06 14:27:21.756015] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.122 [2024-11-06 14:27:21.756044] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:54.122 [2024-11-06 14:27:21.756142] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.122 [2024-11-06 14:27:21.756152] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.122 [2024-11-06 14:27:21.756162] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.122 [2024-11-06 14:27:21.756169] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:54.122 [2024-11-06 14:27:21.756179] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:22:54.122 [2024-11-06 14:27:21.756191] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:22:54.122 [2024-11-06 14:27:21.756204] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.122 [2024-11-06 14:27:21.756212] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.122 [2024-11-06 14:27:21.756219] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:54.122 [2024-11-06 14:27:21.756233] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.123 [2024-11-06 14:27:21.756252] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:54.123 [2024-11-06 14:27:21.756337] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.123 [2024-11-06 14:27:21.756345] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.123 [2024-11-06 14:27:21.756351] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.386 [2024-11-06 14:27:21.756357] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:54.386 [2024-11-06 14:27:21.756366] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:22:54.386 [2024-11-06 14:27:21.756383] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:54.386 [2024-11-06 14:27:21.756393] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.386 [2024-11-06 14:27:21.756399] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:22:54.386 [2024-11-06 14:27:21.756406] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:54.386 [2024-11-06 14:27:21.756416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.386 [2024-11-06 14:27:21.756438] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:54.386 [2024-11-06 14:27:21.756496] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.386 [2024-11-06 14:27:21.756504] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.386 [2024-11-06 14:27:21.756510] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.386 [2024-11-06 14:27:21.756516] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:54.386 [2024-11-06 14:27:21.756524] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:54.386 [2024-11-06 14:27:21.756540] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.386 [2024-11-06 14:27:21.756548] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.386 [2024-11-06 14:27:21.756557] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:54.386 [2024-11-06 14:27:21.756568] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.386 [2024-11-06 14:27:21.756586] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:54.386 [2024-11-06 14:27:21.756635] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.386 [2024-11-06 14:27:21.756644] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.386 [2024-11-06 14:27:21.756652] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.386 [2024-11-06 14:27:21.756658] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:54.386 [2024-11-06 14:27:21.756666] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:54.386 [2024-11-06 14:27:21.756674] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:54.386 [2024-11-06 14:27:21.756693] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:54.386 [2024-11-06 14:27:21.756802] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:22:54.386 [2024-11-06 14:27:21.756811] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:54.386 [2024-11-06 14:27:21.756823] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.386 [2024-11-06 14:27:21.756830] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.386 [2024-11-06 14:27:21.756852] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:54.386 [2024-11-06 14:27:21.756868] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.386 [2024-11-06 14:27:21.756889] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:54.386 [2024-11-06 14:27:21.756958] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.386 [2024-11-06 14:27:21.756966] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.386 [2024-11-06 14:27:21.756972] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.386 [2024-11-06 14:27:21.756978] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:54.386 [2024-11-06 14:27:21.756987] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:54.386 [2024-11-06 14:27:21.757001] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.386 [2024-11-06 14:27:21.757013] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.386 [2024-11-06 14:27:21.757020] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:54.386 [2024-11-06 14:27:21.757031] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.386 [2024-11-06 14:27:21.757049] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:54.386 [2024-11-06 14:27:21.757100] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.386 [2024-11-06 14:27:21.757108] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.386 [2024-11-06 14:27:21.757114] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.386 [2024-11-06 14:27:21.757123] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:54.386 [2024-11-06 14:27:21.757131] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:54.386 [2024-11-06 14:27:21.757139] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:54.386 [2024-11-06 14:27:21.757163] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:22:54.386 [2024-11-06 14:27:21.757177] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:54.386 [2024-11-06 14:27:21.757194] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.386 [2024-11-06 14:27:21.757200] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:54.386 [2024-11-06 14:27:21.757212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.386 [2024-11-06 14:27:21.757232] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:54.386 [2024-11-06 14:27:21.757375] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:54.386 [2024-11-06 14:27:21.757393] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:54.386 [2024-11-06 14:27:21.757399] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:54.386 [2024-11-06 14:27:21.757407] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=0 00:22:54.386 [2024-11-06 14:27:21.757415] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:22:54.387 [2024-11-06 14:27:21.757422] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.387 [2024-11-06 14:27:21.757435] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:54.387 [2024-11-06 14:27:21.757442] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:54.387 [2024-11-06 14:27:21.757456] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.387 [2024-11-06 14:27:21.757463] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.387 [2024-11-06 14:27:21.757469] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.387 [2024-11-06 14:27:21.757480] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:54.387 [2024-11-06 14:27:21.757501] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:22:54.387 [2024-11-06 14:27:21.757511] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:22:54.387 [2024-11-06 14:27:21.757518] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:22:54.387 [2024-11-06 14:27:21.757526] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:22:54.387 [2024-11-06 14:27:21.757533] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:22:54.387 [2024-11-06 14:27:21.757541] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:22:54.387 [2024-11-06 14:27:21.757553] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:54.387 [2024-11-06 14:27:21.757563] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.387 [2024-11-06 14:27:21.757570] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.387 [2024-11-06 14:27:21.757580] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:54.387 [2024-11-06 14:27:21.757592] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:54.387 [2024-11-06 14:27:21.757610] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:54.387 [2024-11-06 14:27:21.757693] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.387 [2024-11-06 14:27:21.757701] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.387 [2024-11-06 14:27:21.757707] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.387 [2024-11-06 14:27:21.757713] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) 
on tqpair=0x61500000f080 00:22:54.387 [2024-11-06 14:27:21.757730] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.387 [2024-11-06 14:27:21.757737] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.387 [2024-11-06 14:27:21.757744] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:54.387 [2024-11-06 14:27:21.757760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.387 [2024-11-06 14:27:21.757770] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.387 [2024-11-06 14:27:21.757776] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.387 [2024-11-06 14:27:21.757782] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x61500000f080) 00:22:54.387 [2024-11-06 14:27:21.757791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.387 [2024-11-06 14:27:21.757799] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.387 [2024-11-06 14:27:21.757805] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.387 [2024-11-06 14:27:21.757810] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x61500000f080) 00:22:54.387 [2024-11-06 14:27:21.757819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.387 [2024-11-06 14:27:21.757833] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.387 [2024-11-06 14:27:21.757852] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.387 [2024-11-06 14:27:21.757858] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:54.387 [2024-11-06 14:27:21.757867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.387 [2024-11-06 14:27:21.757875] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:54.387 [2024-11-06 14:27:21.757893] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:54.387 [2024-11-06 14:27:21.757903] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.387 [2024-11-06 14:27:21.757912] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:22:54.387 [2024-11-06 14:27:21.757923] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.387 [2024-11-06 14:27:21.757948] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:54.387 [2024-11-06 14:27:21.757956] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:22:54.387 [2024-11-06 14:27:21.757962] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:22:54.387 [2024-11-06 14:27:21.757969] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:54.387 [2024-11-06 
14:27:21.757979] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:22:54.387 [2024-11-06 14:27:21.758099] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.387 [2024-11-06 14:27:21.758108] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.387 [2024-11-06 14:27:21.758113] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.387 [2024-11-06 14:27:21.758119] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:22:54.387 [2024-11-06 14:27:21.758128] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:22:54.387 [2024-11-06 14:27:21.758137] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:54.387 [2024-11-06 14:27:21.758153] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:22:54.387 [2024-11-06 14:27:21.758162] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:54.387 [2024-11-06 14:27:21.758172] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.387 [2024-11-06 14:27:21.758179] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.387 [2024-11-06 14:27:21.758185] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:22:54.387 [2024-11-06 14:27:21.758200] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:54.387 [2024-11-06 14:27:21.758221] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:22:54.387 [2024-11-06 14:27:21.758285] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.387 [2024-11-06 14:27:21.758294] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.387 [2024-11-06 14:27:21.758300] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.387 [2024-11-06 14:27:21.758305] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:22:54.387 [2024-11-06 14:27:21.758386] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:22:54.387 [2024-11-06 14:27:21.758404] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:54.387 [2024-11-06 14:27:21.758419] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.387 [2024-11-06 14:27:21.758426] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:22:54.387 [2024-11-06 14:27:21.758437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.387 [2024-11-06 14:27:21.758465] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:22:54.387 [2024-11-06 14:27:21.758570] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:54.387 
[2024-11-06 14:27:21.758578] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:54.387 [2024-11-06 14:27:21.758584] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:54.387 [2024-11-06 14:27:21.758591] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:22:54.387 [2024-11-06 14:27:21.758598] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:22:54.387 [2024-11-06 14:27:21.758605] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.387 [2024-11-06 14:27:21.758621] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:54.387 [2024-11-06 14:27:21.758628] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:54.387 [2024-11-06 14:27:21.758638] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.387 [2024-11-06 14:27:21.758646] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.387 [2024-11-06 14:27:21.758651] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.387 [2024-11-06 14:27:21.758657] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:22:54.387 [2024-11-06 14:27:21.758689] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:22:54.387 [2024-11-06 14:27:21.758709] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:22:54.387 [2024-11-06 14:27:21.758732] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:22:54.387 [2024-11-06 14:27:21.758748] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.387 [2024-11-06 14:27:21.758760] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:22:54.387 [2024-11-06 14:27:21.758775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.387 [2024-11-06 14:27:21.758794] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:22:54.387 [2024-11-06 14:27:21.758930] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:54.387 [2024-11-06 14:27:21.758943] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:54.387 [2024-11-06 14:27:21.758948] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:54.387 [2024-11-06 14:27:21.758955] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:22:54.387 [2024-11-06 14:27:21.758961] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:22:54.387 [2024-11-06 14:27:21.758968] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.387 [2024-11-06 14:27:21.758977] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:54.387 [2024-11-06 14:27:21.758983] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:54.388 [2024-11-06 14:27:21.759003] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.388 [2024-11-06 14:27:21.759013] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.388 [2024-11-06 14:27:21.759018] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.388 [2024-11-06 14:27:21.759025] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:22:54.388 [2024-11-06 14:27:21.759060] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:54.388 [2024-11-06 14:27:21.759079] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:54.388 [2024-11-06 14:27:21.759092] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.388 [2024-11-06 14:27:21.759099] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:22:54.388 [2024-11-06 14:27:21.759110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.388 [2024-11-06 14:27:21.759133] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:22:54.388 [2024-11-06 14:27:21.759217] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:54.388 [2024-11-06 14:27:21.759226] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:54.388 [2024-11-06 14:27:21.759231] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:54.388 [2024-11-06 14:27:21.759237] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:22:54.388 [2024-11-06 14:27:21.759244] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:22:54.388 [2024-11-06 14:27:21.759250] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.388 [2024-11-06 14:27:21.759262] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:54.388 [2024-11-06 14:27:21.759268] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:54.388 [2024-11-06 14:27:21.759290] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.388 [2024-11-06 14:27:21.759298] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.388 [2024-11-06 14:27:21.759303] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.388 [2024-11-06 14:27:21.759309] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:22:54.388 [2024-11-06 14:27:21.759337] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:54.388 [2024-11-06 14:27:21.759349] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:22:54.388 [2024-11-06 14:27:21.759365] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:22:54.388 [2024-11-06 14:27:21.759378] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:54.388 [2024-11-06 14:27:21.759386] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:54.388 [2024-11-06 14:27:21.759395] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:22:54.388 [2024-11-06 14:27:21.759403] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:22:54.388 [2024-11-06 14:27:21.759414] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:22:54.388 [2024-11-06 14:27:21.759422] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:22:54.388 [2024-11-06 14:27:21.759457] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.388 [2024-11-06 14:27:21.759464] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:22:54.388 [2024-11-06 14:27:21.759475] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.388 [2024-11-06 14:27:21.759485] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.388 [2024-11-06 14:27:21.759492] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.388 [2024-11-06 14:27:21.759498] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:22:54.388 [2024-11-06 14:27:21.759511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.388 [2024-11-06 14:27:21.759535] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:22:54.388 [2024-11-06 14:27:21.759543] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:22:54.388 [2024-11-06 14:27:21.759646] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.388 [2024-11-06 14:27:21.759655] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.388 [2024-11-06 14:27:21.759661] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.388 [2024-11-06 14:27:21.759667] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:22:54.388 [2024-11-06 14:27:21.759678] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.388 [2024-11-06 14:27:21.759686] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.388 [2024-11-06 14:27:21.759691] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.388 [2024-11-06 14:27:21.759697] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:22:54.388 [2024-11-06 14:27:21.759710] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.388 [2024-11-06 14:27:21.759716] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:22:54.388 [2024-11-06 14:27:21.759728] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.388 [2024-11-06 14:27:21.759746] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x62600001b880, cid 5, qid 0 00:22:54.388 [2024-11-06 14:27:21.759807] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.388 [2024-11-06 14:27:21.759820] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.388 [2024-11-06 14:27:21.759826] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.388 [2024-11-06 14:27:21.759832] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:22:54.388 [2024-11-06 14:27:21.763880] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.388 [2024-11-06 14:27:21.763891] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:22:54.388 [2024-11-06 14:27:21.763904] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.388 [2024-11-06 14:27:21.763930] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:22:54.388 [2024-11-06 14:27:21.764036] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.388 [2024-11-06 14:27:21.764047] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.388 [2024-11-06 14:27:21.764053] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.388 [2024-11-06 14:27:21.764059] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:22:54.388 [2024-11-06 14:27:21.764075] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.388 [2024-11-06 14:27:21.764082] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:22:54.388 [2024-11-06 14:27:21.764095] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.388 [2024-11-06 14:27:21.764115] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:22:54.388 [2024-11-06 14:27:21.764177] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.388 [2024-11-06 14:27:21.764185] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.388 [2024-11-06 14:27:21.764190] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.388 [2024-11-06 14:27:21.764196] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:22:54.388 [2024-11-06 14:27:21.764223] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.388 [2024-11-06 14:27:21.764230] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:22:54.388 [2024-11-06 14:27:21.764241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.388 [2024-11-06 14:27:21.764253] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.388 [2024-11-06 14:27:21.764260] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:22:54.388 [2024-11-06 14:27:21.764276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.388 
[2024-11-06 14:27:21.764287] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.388 [2024-11-06 14:27:21.764294] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x61500000f080) 00:22:54.388 [2024-11-06 14:27:21.764304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.388 [2024-11-06 14:27:21.764318] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.388 [2024-11-06 14:27:21.764324] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x61500000f080) 00:22:54.388 [2024-11-06 14:27:21.764334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.388 [2024-11-06 14:27:21.764354] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:22:54.388 [2024-11-06 14:27:21.764362] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:22:54.388 [2024-11-06 14:27:21.764369] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001ba00, cid 6, qid 0 00:22:54.388 [2024-11-06 14:27:21.764375] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:22:54.388 [2024-11-06 14:27:21.764598] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:54.388 [2024-11-06 14:27:21.764617] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:54.388 [2024-11-06 14:27:21.764623] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:54.388 [2024-11-06 14:27:21.764633] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=8192, cccid=5 00:22:54.388 [2024-11-06 14:27:21.764641] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b880) on tqpair(0x61500000f080): expected_datao=0, payload_size=8192 00:22:54.388 [2024-11-06 14:27:21.764654] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.388 [2024-11-06 14:27:21.764681] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:54.388 [2024-11-06 14:27:21.764688] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:54.388 [2024-11-06 14:27:21.764696] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:54.388 [2024-11-06 14:27:21.764704] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:54.389 [2024-11-06 14:27:21.764709] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:54.389 [2024-11-06 14:27:21.764715] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=512, cccid=4 00:22:54.389 [2024-11-06 14:27:21.764722] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=512 00:22:54.389 [2024-11-06 14:27:21.764728] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.389 [2024-11-06 14:27:21.764742] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:54.389 [2024-11-06 14:27:21.764747] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:54.389 [2024-11-06 14:27:21.764755] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:54.389 
[2024-11-06 14:27:21.764762] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:54.389 [2024-11-06 14:27:21.764768] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:54.389 [2024-11-06 14:27:21.764774] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=512, cccid=6 00:22:54.389 [2024-11-06 14:27:21.764780] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001ba00) on tqpair(0x61500000f080): expected_datao=0, payload_size=512 00:22:54.389 [2024-11-06 14:27:21.764786] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.389 [2024-11-06 14:27:21.764798] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:54.389 [2024-11-06 14:27:21.764804] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:54.389 [2024-11-06 14:27:21.764814] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:54.389 [2024-11-06 14:27:21.764821] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:54.389 [2024-11-06 14:27:21.764827] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:54.389 [2024-11-06 14:27:21.764833] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=7 00:22:54.389 [2024-11-06 14:27:21.764858] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001bb80) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:22:54.389 [2024-11-06 14:27:21.764864] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.389 [2024-11-06 14:27:21.764874] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:54.389 [2024-11-06 14:27:21.764879] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:54.389 [2024-11-06 14:27:21.764887] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.389 [2024-11-06 14:27:21.764894] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.389 [2024-11-06 14:27:21.764899] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.389 [2024-11-06 14:27:21.764909] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:22:54.389 [2024-11-06 14:27:21.764935] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.389 [2024-11-06 14:27:21.764943] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.389 [2024-11-06 14:27:21.764948] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.389 [2024-11-06 14:27:21.764954] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:22:54.389 [2024-11-06 14:27:21.764968] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.389 [2024-11-06 14:27:21.764976] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.389 [2024-11-06 14:27:21.764981] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.389 [2024-11-06 14:27:21.764987] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001ba00) on tqpair=0x61500000f080 00:22:54.389 [2024-11-06 14:27:21.764998] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.389 [2024-11-06 14:27:21.765008] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.389 [2024-11-06 14:27:21.765014] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.389 [2024-11-06 14:27:21.765020] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x61500000f080 00:22:54.389 ===================================================== 00:22:54.389 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:22:54.389 ===================================================== 00:22:54.389 Controller Capabilities/Features 00:22:54.389 ================================ 00:22:54.389 Vendor ID: 8086 00:22:54.389 Subsystem Vendor ID: 8086 00:22:54.389 Serial Number: SPDK00000000000001 00:22:54.389 Model Number: SPDK bdev Controller 00:22:54.389 Firmware Version: 25.01 00:22:54.389 Recommended Arb Burst: 6 00:22:54.389 IEEE OUI Identifier: e4 d2 5c 00:22:54.389 Multi-path I/O 00:22:54.389 May have multiple subsystem ports: Yes 00:22:54.389 May have multiple controllers: Yes 00:22:54.389 Associated with SR-IOV VF: No 00:22:54.389 Max Data Transfer Size: 131072 00:22:54.389 Max Number of Namespaces: 32 00:22:54.389 Max Number of I/O Queues: 127 00:22:54.389 NVMe Specification Version (VS): 1.3 00:22:54.389 NVMe Specification Version (Identify): 1.3 00:22:54.389 Maximum Queue Entries: 128 00:22:54.389 Contiguous Queues Required: Yes 00:22:54.389 Arbitration Mechanisms Supported 00:22:54.389 Weighted Round Robin: Not Supported 00:22:54.389 Vendor Specific: Not Supported 00:22:54.389 Reset Timeout: 15000 ms 00:22:54.389 Doorbell Stride: 4 bytes 00:22:54.389 NVM Subsystem Reset: Not Supported 00:22:54.389 Command Sets Supported 00:22:54.389 NVM Command Set: Supported 00:22:54.389 Boot Partition: Not Supported 00:22:54.389 Memory Page Size Minimum: 4096 bytes 00:22:54.389 Memory Page Size Maximum: 4096 bytes 00:22:54.389 Persistent Memory Region: Not Supported 00:22:54.389 Optional Asynchronous Events Supported 00:22:54.389 Namespace Attribute Notices: Supported 00:22:54.389 Firmware Activation Notices: Not Supported 00:22:54.389 ANA Change Notices: Not Supported 00:22:54.389 PLE Aggregate Log Change Notices: Not Supported 00:22:54.389 LBA Status Info Alert Notices: Not Supported 00:22:54.389 EGE Aggregate Log Change Notices: Not Supported 00:22:54.389 Normal NVM Subsystem Shutdown event: Not Supported 00:22:54.389 Zone Descriptor Change Notices: Not Supported 00:22:54.389 Discovery Log Change Notices: Not Supported 00:22:54.389 Controller Attributes 00:22:54.389 128-bit Host Identifier: Supported 00:22:54.389 Non-Operational Permissive Mode: Not Supported 00:22:54.389 NVM Sets: Not Supported 00:22:54.389 Read Recovery Levels: Not Supported 00:22:54.389 Endurance Groups: Not Supported 00:22:54.389 Predictable Latency Mode: Not Supported 00:22:54.389 Traffic Based Keep ALive: Not Supported 00:22:54.389 Namespace Granularity: Not Supported 00:22:54.389 SQ Associations: Not Supported 00:22:54.389 UUID List: Not Supported 00:22:54.389 Multi-Domain Subsystem: Not Supported 00:22:54.389 Fixed Capacity Management: Not Supported 00:22:54.389 Variable Capacity Management: Not Supported 00:22:54.389 Delete Endurance Group: Not Supported 00:22:54.389 Delete NVM Set: Not Supported 00:22:54.389 Extended LBA Formats Supported: Not Supported 00:22:54.389 Flexible Data Placement Supported: Not Supported 00:22:54.389 00:22:54.389 Controller Memory Buffer Support 00:22:54.389 ================================ 00:22:54.389 Supported: No 00:22:54.389 00:22:54.389 Persistent Memory Region Support 00:22:54.389 ================================ 00:22:54.389 Supported: No 
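The controller report in this part of the log is built from the Identify Controller data that the target (nqn.2016-06.io.spdk:cnode1) returns over NVMe/TCP; the spdk_nvme_identify run shown earlier drives the same admin commands traced in the DEBUG records above. As a minimal sketch only (a hypothetical standalone program, not part of this test run; it reuses the test's transport parameters and the public SPDK host API, with error handling abbreviated), the same basic fields could be fetched like this:

/* identify_sketch.c - hypothetical example, not part of the autotest */
#include "spdk/env.h"
#include "spdk/nvme.h"
#include <stdio.h>

int main(void)
{
	struct spdk_env_opts opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	/* Initialize the SPDK environment (hugepages/DPDK EAL). */
	spdk_env_opts_init(&opts);
	opts.name = "identify_sketch";
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}

	/* Same target as the test: NVMe/TCP, 10.0.0.3:4420, subsystem cnode1. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:TCP adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* Connect directly to the subsystem; this issues the Fabrics CONNECT
	 * and controller-init sequence traced in the DEBUG records above. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	/* Identify Controller data backs the "Controller Capabilities/Features"
	 * section of the report; the string fields are space-padded, not
	 * NUL-terminated, hence the length-bounded prints. */
	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Serial Number: %.*s\n", (int)sizeof(cdata->sn), cdata->sn);
	printf("Model Number:  %.*s\n", (int)sizeof(cdata->mn), cdata->mn);
	printf("Firmware Rev:  %.*s\n", (int)sizeof(cdata->fr), cdata->fr);

	spdk_nvme_detach(ctrlr);
	return 0;
}

The full-featured equivalent is the in-tree examples/nvme/identify tool, which is what the test invokes as build/bin/spdk_nvme_identify; linking a standalone build like the sketch above additionally requires the SPDK NVMe and env libraries.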
00:22:54.389 00:22:54.389 Admin Command Set Attributes 00:22:54.389 ============================ 00:22:54.389 Security Send/Receive: Not Supported 00:22:54.389 Format NVM: Not Supported 00:22:54.389 Firmware Activate/Download: Not Supported 00:22:54.389 Namespace Management: Not Supported 00:22:54.389 Device Self-Test: Not Supported 00:22:54.389 Directives: Not Supported 00:22:54.389 NVMe-MI: Not Supported 00:22:54.389 Virtualization Management: Not Supported 00:22:54.389 Doorbell Buffer Config: Not Supported 00:22:54.389 Get LBA Status Capability: Not Supported 00:22:54.389 Command & Feature Lockdown Capability: Not Supported 00:22:54.389 Abort Command Limit: 4 00:22:54.389 Async Event Request Limit: 4 00:22:54.389 Number of Firmware Slots: N/A 00:22:54.389 Firmware Slot 1 Read-Only: N/A 00:22:54.389 Firmware Activation Without Reset: N/A 00:22:54.389 Multiple Update Detection Support: N/A 00:22:54.389 Firmware Update Granularity: No Information Provided 00:22:54.389 Per-Namespace SMART Log: No 00:22:54.389 Asymmetric Namespace Access Log Page: Not Supported 00:22:54.389 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:22:54.389 Command Effects Log Page: Supported 00:22:54.389 Get Log Page Extended Data: Supported 00:22:54.389 Telemetry Log Pages: Not Supported 00:22:54.389 Persistent Event Log Pages: Not Supported 00:22:54.389 Supported Log Pages Log Page: May Support 00:22:54.389 Commands Supported & Effects Log Page: Not Supported 00:22:54.389 Feature Identifiers & Effects Log Page:May Support 00:22:54.389 NVMe-MI Commands & Effects Log Page: May Support 00:22:54.389 Data Area 4 for Telemetry Log: Not Supported 00:22:54.389 Error Log Page Entries Supported: 128 00:22:54.389 Keep Alive: Supported 00:22:54.389 Keep Alive Granularity: 10000 ms 00:22:54.389 00:22:54.389 NVM Command Set Attributes 00:22:54.389 ========================== 00:22:54.389 Submission Queue Entry Size 00:22:54.389 Max: 64 00:22:54.389 Min: 64 00:22:54.389 Completion Queue Entry Size 00:22:54.389 Max: 16 00:22:54.389 Min: 16 00:22:54.389 Number of Namespaces: 32 00:22:54.389 Compare Command: Supported 00:22:54.389 Write Uncorrectable Command: Not Supported 00:22:54.389 Dataset Management Command: Supported 00:22:54.389 Write Zeroes Command: Supported 00:22:54.389 Set Features Save Field: Not Supported 00:22:54.389 Reservations: Supported 00:22:54.389 Timestamp: Not Supported 00:22:54.389 Copy: Supported 00:22:54.389 Volatile Write Cache: Present 00:22:54.389 Atomic Write Unit (Normal): 1 00:22:54.390 Atomic Write Unit (PFail): 1 00:22:54.390 Atomic Compare & Write Unit: 1 00:22:54.390 Fused Compare & Write: Supported 00:22:54.390 Scatter-Gather List 00:22:54.390 SGL Command Set: Supported 00:22:54.390 SGL Keyed: Supported 00:22:54.390 SGL Bit Bucket Descriptor: Not Supported 00:22:54.390 SGL Metadata Pointer: Not Supported 00:22:54.390 Oversized SGL: Not Supported 00:22:54.390 SGL Metadata Address: Not Supported 00:22:54.390 SGL Offset: Supported 00:22:54.390 Transport SGL Data Block: Not Supported 00:22:54.390 Replay Protected Memory Block: Not Supported 00:22:54.390 00:22:54.390 Firmware Slot Information 00:22:54.390 ========================= 00:22:54.390 Active slot: 1 00:22:54.390 Slot 1 Firmware Revision: 25.01 00:22:54.390 00:22:54.390 00:22:54.390 Commands Supported and Effects 00:22:54.390 ============================== 00:22:54.390 Admin Commands 00:22:54.390 -------------- 00:22:54.390 Get Log Page (02h): Supported 00:22:54.390 Identify (06h): Supported 00:22:54.390 Abort (08h): Supported 00:22:54.390 Set 
Features (09h): Supported 00:22:54.390 Get Features (0Ah): Supported 00:22:54.390 Asynchronous Event Request (0Ch): Supported 00:22:54.390 Keep Alive (18h): Supported 00:22:54.390 I/O Commands 00:22:54.390 ------------ 00:22:54.390 Flush (00h): Supported LBA-Change 00:22:54.390 Write (01h): Supported LBA-Change 00:22:54.390 Read (02h): Supported 00:22:54.390 Compare (05h): Supported 00:22:54.390 Write Zeroes (08h): Supported LBA-Change 00:22:54.390 Dataset Management (09h): Supported LBA-Change 00:22:54.390 Copy (19h): Supported LBA-Change 00:22:54.390 00:22:54.390 Error Log 00:22:54.390 ========= 00:22:54.390 00:22:54.390 Arbitration 00:22:54.390 =========== 00:22:54.390 Arbitration Burst: 1 00:22:54.390 00:22:54.390 Power Management 00:22:54.390 ================ 00:22:54.390 Number of Power States: 1 00:22:54.390 Current Power State: Power State #0 00:22:54.390 Power State #0: 00:22:54.390 Max Power: 0.00 W 00:22:54.390 Non-Operational State: Operational 00:22:54.390 Entry Latency: Not Reported 00:22:54.390 Exit Latency: Not Reported 00:22:54.390 Relative Read Throughput: 0 00:22:54.390 Relative Read Latency: 0 00:22:54.390 Relative Write Throughput: 0 00:22:54.390 Relative Write Latency: 0 00:22:54.390 Idle Power: Not Reported 00:22:54.390 Active Power: Not Reported 00:22:54.390 Non-Operational Permissive Mode: Not Supported 00:22:54.390 00:22:54.390 Health Information 00:22:54.390 ================== 00:22:54.390 Critical Warnings: 00:22:54.390 Available Spare Space: OK 00:22:54.390 Temperature: OK 00:22:54.390 Device Reliability: OK 00:22:54.390 Read Only: No 00:22:54.390 Volatile Memory Backup: OK 00:22:54.390 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:54.390 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:22:54.390 Available Spare: 0% 00:22:54.390 Available Spare Threshold: 0% 00:22:54.390 Life Percentage Used:[2024-11-06 14:27:21.765170] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.390 [2024-11-06 14:27:21.765178] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x61500000f080) 00:22:54.390 [2024-11-06 14:27:21.765191] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.390 [2024-11-06 14:27:21.765216] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:22:54.390 [2024-11-06 14:27:21.765295] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.390 [2024-11-06 14:27:21.765304] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.390 [2024-11-06 14:27:21.765310] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.390 [2024-11-06 14:27:21.765317] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x61500000f080 00:22:54.390 [2024-11-06 14:27:21.765413] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:22:54.390 [2024-11-06 14:27:21.765438] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:54.390 [2024-11-06 14:27:21.765453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.390 [2024-11-06 14:27:21.765462] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x61500000f080 00:22:54.390 [2024-11-06 14:27:21.765471] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.390 [2024-11-06 14:27:21.765479] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x61500000f080 00:22:54.390 [2024-11-06 14:27:21.765487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.390 [2024-11-06 14:27:21.765494] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:54.390 [2024-11-06 14:27:21.765502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.390 [2024-11-06 14:27:21.765515] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.390 [2024-11-06 14:27:21.765523] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.390 [2024-11-06 14:27:21.765529] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:54.390 [2024-11-06 14:27:21.765541] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.390 [2024-11-06 14:27:21.765570] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:54.390 [2024-11-06 14:27:21.765648] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.390 [2024-11-06 14:27:21.765658] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.390 [2024-11-06 14:27:21.765664] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.390 [2024-11-06 14:27:21.765671] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:54.390 [2024-11-06 14:27:21.765682] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.390 [2024-11-06 14:27:21.765695] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.390 [2024-11-06 14:27:21.765702] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:54.390 [2024-11-06 14:27:21.765713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.390 [2024-11-06 14:27:21.765738] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:54.390 [2024-11-06 14:27:21.765854] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.390 [2024-11-06 14:27:21.765864] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.390 [2024-11-06 14:27:21.765870] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.390 [2024-11-06 14:27:21.765875] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:54.390 [2024-11-06 14:27:21.765884] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:22:54.390 [2024-11-06 14:27:21.765892] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:22:54.390 [2024-11-06 14:27:21.765906] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.390 [2024-11-06 14:27:21.765913] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:22:54.390 [2024-11-06 14:27:21.765919] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:54.390 [2024-11-06 14:27:21.765933] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.390 [2024-11-06 14:27:21.765953] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:54.390 [2024-11-06 14:27:21.766013] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.390 [2024-11-06 14:27:21.766021] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.390 [2024-11-06 14:27:21.766027] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.390 [2024-11-06 14:27:21.766033] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:54.390 [2024-11-06 14:27:21.766049] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.390 [2024-11-06 14:27:21.766055] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.390 [2024-11-06 14:27:21.766061] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:54.390 [2024-11-06 14:27:21.766070] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.390 [2024-11-06 14:27:21.766087] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:54.390 [2024-11-06 14:27:21.766160] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.390 [2024-11-06 14:27:21.766168] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.391 [2024-11-06 14:27:21.766174] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.391 [2024-11-06 14:27:21.766180] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:54.391 [2024-11-06 14:27:21.766192] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.391 [2024-11-06 14:27:21.766198] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.391 [2024-11-06 14:27:21.766204] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:54.391 [2024-11-06 14:27:21.766213] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.391 [2024-11-06 14:27:21.766229] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:54.391 [2024-11-06 14:27:21.766294] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.391 [2024-11-06 14:27:21.766302] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.391 [2024-11-06 14:27:21.766308] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.391 [2024-11-06 14:27:21.766313] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:54.391 [2024-11-06 14:27:21.766326] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.391 [2024-11-06 14:27:21.766332] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.391 [2024-11-06 14:27:21.766340] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 
00:22:54.391 [2024-11-06 14:27:21.766353] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.391 [2024-11-06 14:27:21.766369] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:54.391 [2024-11-06 14:27:21.766434] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.391 [2024-11-06 14:27:21.766445] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.391 [2024-11-06 14:27:21.766460] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.391 [2024-11-06 14:27:21.766466] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:54.391 [2024-11-06 14:27:21.766479] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.391 [2024-11-06 14:27:21.766485] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.391 [2024-11-06 14:27:21.766491] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:54.391 [2024-11-06 14:27:21.766501] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.391 [2024-11-06 14:27:21.766518] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:54.391 [2024-11-06 14:27:21.766590] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.391 [2024-11-06 14:27:21.766599] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.391 [2024-11-06 14:27:21.766604] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.391 [2024-11-06 14:27:21.766610] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:54.391 [2024-11-06 14:27:21.766623] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.391 [2024-11-06 14:27:21.766629] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.391 [2024-11-06 14:27:21.766635] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:54.391 [2024-11-06 14:27:21.766644] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.391 [2024-11-06 14:27:21.766665] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:54.391 [2024-11-06 14:27:21.766722] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.391 [2024-11-06 14:27:21.766730] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.391 [2024-11-06 14:27:21.766735] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.391 [2024-11-06 14:27:21.766741] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:54.391 [2024-11-06 14:27:21.766756] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.391 [2024-11-06 14:27:21.766762] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.391 [2024-11-06 14:27:21.766768] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:54.391 [2024-11-06 14:27:21.766777] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:54.391 [2024-11-06 14:27:21.766793] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:54.391 [2024-11-06 14:27:21.766879] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.391 [2024-11-06 14:27:21.766887] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.391 [2024-11-06 14:27:21.766893] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.391 [2024-11-06 14:27:21.766899] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:54.391 [2024-11-06 14:27:21.766912] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.391 [2024-11-06 14:27:21.766918] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.391 [2024-11-06 14:27:21.766923] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:54.391 [2024-11-06 14:27:21.766933] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.391 [2024-11-06 14:27:21.766950] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:54.391 [2024-11-06 14:27:21.767026] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.391 [2024-11-06 14:27:21.767034] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.391 [2024-11-06 14:27:21.767039] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.391 [2024-11-06 14:27:21.767045] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:54.391 [2024-11-06 14:27:21.767057] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.391 [2024-11-06 14:27:21.767066] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.391 [2024-11-06 14:27:21.767072] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:54.391 [2024-11-06 14:27:21.767085] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.391 [2024-11-06 14:27:21.767101] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:54.391 [2024-11-06 14:27:21.767179] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.391 [2024-11-06 14:27:21.767187] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.391 [2024-11-06 14:27:21.767192] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.391 [2024-11-06 14:27:21.767198] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:54.391 [2024-11-06 14:27:21.767210] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.391 [2024-11-06 14:27:21.767217] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.391 [2024-11-06 14:27:21.767222] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:54.391 [2024-11-06 14:27:21.767231] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.391 [2024-11-06 14:27:21.767248] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, 
cid 3, qid 0 00:22:54.391 [2024-11-06 14:27:21.767322] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.391 [2024-11-06 14:27:21.767330] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.391 [2024-11-06 14:27:21.767336] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.391 [2024-11-06 14:27:21.767342] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:54.391 [2024-11-06 14:27:21.767354] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.391 [2024-11-06 14:27:21.767360] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.391 [2024-11-06 14:27:21.767371] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:54.391 [2024-11-06 14:27:21.767381] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.391 [2024-11-06 14:27:21.767397] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:54.391 [2024-11-06 14:27:21.767483] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.391 [2024-11-06 14:27:21.767492] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.391 [2024-11-06 14:27:21.767497] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.391 [2024-11-06 14:27:21.767503] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:54.391 [2024-11-06 14:27:21.767515] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.391 [2024-11-06 14:27:21.767530] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.391 [2024-11-06 14:27:21.767536] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:54.391 [2024-11-06 14:27:21.767552] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.391 [2024-11-06 14:27:21.767572] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:54.391 [2024-11-06 14:27:21.767639] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.391 [2024-11-06 14:27:21.767647] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.391 [2024-11-06 14:27:21.767652] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.391 [2024-11-06 14:27:21.767658] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:54.391 [2024-11-06 14:27:21.767671] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.391 [2024-11-06 14:27:21.767677] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.391 [2024-11-06 14:27:21.767682] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:54.391 [2024-11-06 14:27:21.767695] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.391 [2024-11-06 14:27:21.767712] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:54.391 [2024-11-06 14:27:21.767771] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.391 [2024-11-06 
14:27:21.767779] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.391 [2024-11-06 14:27:21.767785] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.391 [2024-11-06 14:27:21.767790] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:54.391 [2024-11-06 14:27:21.767807] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:54.391 [2024-11-06 14:27:21.767813] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:54.392 [2024-11-06 14:27:21.767819] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:54.392 [2024-11-06 14:27:21.767828] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.392 [2024-11-06 14:27:21.771874] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:54.392 [2024-11-06 14:27:21.771951] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:54.392 [2024-11-06 14:27:21.771966] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:54.392 [2024-11-06 14:27:21.771972] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:54.392 [2024-11-06 14:27:21.771978] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:54.392 [2024-11-06 14:27:21.771992] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds 00:22:54.392 0% 00:22:54.392 Data Units Read: 0 00:22:54.392 Data Units Written: 0 00:22:54.392 Host Read Commands: 0 00:22:54.392 Host Write Commands: 0 00:22:54.392 Controller Busy Time: 0 minutes 00:22:54.392 Power Cycles: 0 00:22:54.392 Power On Hours: 0 hours 00:22:54.392 Unsafe Shutdowns: 0 00:22:54.392 Unrecoverable Media Errors: 0 00:22:54.392 Lifetime Error Log Entries: 0 00:22:54.392 Warning Temperature Time: 0 minutes 00:22:54.392 Critical Temperature Time: 0 minutes 00:22:54.392 00:22:54.392 Number of Queues 00:22:54.392 ================ 00:22:54.392 Number of I/O Submission Queues: 127 00:22:54.392 Number of I/O Completion Queues: 127 00:22:54.392 00:22:54.392 Active Namespaces 00:22:54.392 ================= 00:22:54.392 Namespace ID:1 00:22:54.392 Error Recovery Timeout: Unlimited 00:22:54.392 Command Set Identifier: NVM (00h) 00:22:54.392 Deallocate: Supported 00:22:54.392 Deallocated/Unwritten Error: Not Supported 00:22:54.392 Deallocated Read Value: Unknown 00:22:54.392 Deallocate in Write Zeroes: Not Supported 00:22:54.392 Deallocated Guard Field: 0xFFFF 00:22:54.392 Flush: Supported 00:22:54.392 Reservation: Supported 00:22:54.392 Namespace Sharing Capabilities: Multiple Controllers 00:22:54.392 Size (in LBAs): 131072 (0GiB) 00:22:54.392 Capacity (in LBAs): 131072 (0GiB) 00:22:54.392 Utilization (in LBAs): 131072 (0GiB) 00:22:54.392 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:54.392 EUI64: ABCDEF0123456789 00:22:54.392 UUID: 43dc943e-640d-434a-b7a5-73e1f05c7344 00:22:54.392 Thin Provisioning: Not Supported 00:22:54.392 Per-NS Atomic Units: Yes 00:22:54.392 Atomic Boundary Size (Normal): 0 00:22:54.392 Atomic Boundary Size (PFail): 0 00:22:54.392 Atomic Boundary Offset: 0 00:22:54.392 Maximum Single Source Range Length: 65535 00:22:54.392 Maximum Copy Length: 65535 00:22:54.392 Maximum Source Range Count: 1 00:22:54.392 NGUID/EUI64 Never Reused: No 
00:22:54.392 Namespace Write Protected: No 00:22:54.392 Number of LBA Formats: 1 00:22:54.392 Current LBA Format: LBA Format #00 00:22:54.392 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:54.392 00:22:54.392 14:27:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:22:54.392 14:27:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:54.392 14:27:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.392 14:27:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:54.392 14:27:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.392 14:27:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:54.392 14:27:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:22:54.392 14:27:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:54.392 14:27:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:22:54.392 14:27:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:54.392 14:27:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:22:54.392 14:27:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:54.392 14:27:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:54.392 rmmod nvme_tcp 00:22:54.392 rmmod nvme_fabrics 00:22:54.392 rmmod nvme_keyring 00:22:54.392 14:27:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:54.392 14:27:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:22:54.392 14:27:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:22:54.392 14:27:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 80274 ']' 00:22:54.392 14:27:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 80274 00:22:54.392 14:27:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # '[' -z 80274 ']' 00:22:54.392 14:27:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # kill -0 80274 00:22:54.392 14:27:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # uname 00:22:54.392 14:27:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:54.392 14:27:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80274 00:22:54.652 killing process with pid 80274 00:22:54.652 14:27:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:54.652 14:27:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:54.652 14:27:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80274' 00:22:54.652 14:27:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@971 -- # kill 80274 00:22:54.652 14:27:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@976 -- # wait 80274 00:22:56.031 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:56.031 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:56.031 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:56.031 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:22:56.031 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:22:56.031 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:56.031 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:22:56.031 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:56.031 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:56.031 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:56.031 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:56.031 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:56.031 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:56.031 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:56.031 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:56.031 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:56.031 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:56.031 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:56.031 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:56.290 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:56.290 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:56.290 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:56.290 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:56.290 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:56.290 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:56.290 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:56.290 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:22:56.290 00:22:56.290 real 0m4.691s 00:22:56.290 user 0m11.713s 00:22:56.290 sys 0m1.379s 00:22:56.290 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:56.290 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:56.290 ************************************ 00:22:56.290 END TEST nvmf_identify 00:22:56.290 ************************************ 00:22:56.290 14:27:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:56.290 14:27:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:56.290 14:27:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:56.290 14:27:23 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.290 ************************************ 00:22:56.290 START TEST nvmf_perf 00:22:56.290 ************************************ 00:22:56.290 14:27:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:56.550 * Looking for test storage... 00:22:56.550 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:56.550 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:56.550 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:56.550 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:22:56.550 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:56.550 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:56.550 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:56.550 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:56.550 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:22:56.550 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:22:56.550 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:22:56.550 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:22:56.550 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:22:56.550 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:22:56.550 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:22:56.550 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:56.550 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:22:56.550 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:22:56.550 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:56.550 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:56.550 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:22:56.550 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:22:56.550 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:56.550 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:22:56.550 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:56.550 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:22:56.550 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:22:56.550 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:56.550 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:22:56.550 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:56.550 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:56.550 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:56.550 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:22:56.550 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:56.550 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:56.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:56.550 --rc genhtml_branch_coverage=1 00:22:56.550 --rc genhtml_function_coverage=1 00:22:56.550 --rc genhtml_legend=1 00:22:56.550 --rc geninfo_all_blocks=1 00:22:56.550 --rc geninfo_unexecuted_blocks=1 00:22:56.550 00:22:56.550 ' 00:22:56.550 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:56.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:56.550 --rc genhtml_branch_coverage=1 00:22:56.550 --rc genhtml_function_coverage=1 00:22:56.550 --rc genhtml_legend=1 00:22:56.550 --rc geninfo_all_blocks=1 00:22:56.550 --rc geninfo_unexecuted_blocks=1 00:22:56.551 00:22:56.551 ' 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:56.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:56.551 --rc genhtml_branch_coverage=1 00:22:56.551 --rc genhtml_function_coverage=1 00:22:56.551 --rc genhtml_legend=1 00:22:56.551 --rc geninfo_all_blocks=1 00:22:56.551 --rc geninfo_unexecuted_blocks=1 00:22:56.551 00:22:56.551 ' 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:56.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:56.551 --rc genhtml_branch_coverage=1 00:22:56.551 --rc genhtml_function_coverage=1 00:22:56.551 --rc genhtml_legend=1 00:22:56.551 --rc geninfo_all_blocks=1 00:22:56.551 --rc geninfo_unexecuted_blocks=1 00:22:56.551 00:22:56.551 ' 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:56.551 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:56.551 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:56.551 Cannot find device "nvmf_init_br" 00:22:56.813 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:22:56.813 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:56.813 Cannot find device "nvmf_init_br2" 00:22:56.813 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:22:56.813 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:56.813 Cannot find device "nvmf_tgt_br" 00:22:56.813 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:22:56.813 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:56.813 Cannot find device "nvmf_tgt_br2" 00:22:56.813 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:22:56.813 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:56.813 Cannot find device "nvmf_init_br" 00:22:56.813 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:22:56.813 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:56.813 Cannot find device "nvmf_init_br2" 00:22:56.813 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:22:56.813 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:56.813 Cannot find device "nvmf_tgt_br" 00:22:56.813 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:22:56.813 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:56.813 Cannot find device "nvmf_tgt_br2" 00:22:56.813 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:22:56.813 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:56.813 Cannot find device "nvmf_br" 00:22:56.813 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:22:56.813 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:56.813 Cannot find device "nvmf_init_if" 00:22:56.813 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:22:56.813 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:56.813 Cannot find device "nvmf_init_if2" 00:22:56.813 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:22:56.813 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:56.813 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:56.813 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:22:56.813 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:56.813 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:56.813 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:22:56.814 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:56.814 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:56.814 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:56.814 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:56.814 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:57.073 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:57.073 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:57.073 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:57.073 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:57.073 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:57.073 14:27:24 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:57.073 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:57.073 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:57.073 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:57.073 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:57.073 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:57.073 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:57.073 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:57.073 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:57.073 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:57.073 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:57.073 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:57.073 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:57.073 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:57.073 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:57.073 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:57.333 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:57.333 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:57.333 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:57.333 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:57.333 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:57.333 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:57.333 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:57.333 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:57.333 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.153 ms 00:22:57.333 00:22:57.333 --- 10.0.0.3 ping statistics --- 00:22:57.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.333 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:22:57.333 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:57.333 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:22:57.333 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.084 ms 00:22:57.333 00:22:57.333 --- 10.0.0.4 ping statistics --- 00:22:57.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.333 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:22:57.333 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:57.333 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:57.333 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:22:57.333 00:22:57.333 --- 10.0.0.1 ping statistics --- 00:22:57.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.333 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:22:57.333 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:57.333 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:57.333 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:22:57.333 00:22:57.333 --- 10.0.0.2 ping statistics --- 00:22:57.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.333 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:22:57.333 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:57.333 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:22:57.333 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:57.333 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:57.333 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:57.333 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:57.333 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:57.333 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:57.333 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:57.333 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:57.333 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:57.333 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:57.333 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:57.333 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:57.333 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=80550 00:22:57.333 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 80550 00:22:57.333 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # '[' -z 80550 ']' 00:22:57.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:57.333 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:57.333 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:57.333 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
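At this point the perf test has finished building its veth/namespace topology (the pings above confirm 10.0.0.1 through 10.0.0.4 are reachable), starts the target with nvmfappstart, and blocks in waitforlisten until the RPC socket answers. Outside the autotest wrappers, that startup amounts to roughly the sketch below; the binary path, core mask and socket path are the ones printed in this log, and the polling loop is only an assumption about what the waitforlisten helper does internally.

    # Minimal sketch of nvmfappstart + waitforlisten, under the assumptions above.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Wait until the target answers on /var/tmp/spdk.sock before sending config RPCs.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods \
            >/dev/null 2>&1; do
        sleep 0.5
    done

    # The test then provisions storage over RPC, e.g. the 64 MB / 512-byte-block
    # Malloc bdev created a few lines further down in this log:
    #   scripts/rpc.py bdev_malloc_create 64 512
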
00:22:57.333 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:57.333 14:27:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:57.333 [2024-11-06 14:27:24.914990] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:22:57.333 [2024-11-06 14:27:24.915108] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:57.592 [2024-11-06 14:27:25.098374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:57.852 [2024-11-06 14:27:25.253355] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:57.852 [2024-11-06 14:27:25.253557] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:57.852 [2024-11-06 14:27:25.253586] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:57.852 [2024-11-06 14:27:25.253598] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:57.852 [2024-11-06 14:27:25.253611] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:57.852 [2024-11-06 14:27:25.256122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:57.852 [2024-11-06 14:27:25.256225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:57.852 [2024-11-06 14:27:25.256308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:57.852 [2024-11-06 14:27:25.256346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:58.111 [2024-11-06 14:27:25.505521] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:58.111 14:27:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:58.111 14:27:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@866 -- # return 0 00:22:58.111 14:27:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:58.111 14:27:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:58.111 14:27:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:58.371 14:27:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:58.371 14:27:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:22:58.371 14:27:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:22:58.630 14:27:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:22:58.630 14:27:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:58.889 14:27:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:22:58.889 14:27:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:59.148 14:27:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:59.148 14:27:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- 
# '[' -n 0000:00:10.0 ']' 00:22:59.148 14:27:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:59.148 14:27:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:59.148 14:27:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:59.407 [2024-11-06 14:27:26.893255] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:59.407 14:27:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:59.666 14:27:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:59.666 14:27:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:59.926 14:27:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:59.926 14:27:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:00.185 14:27:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:00.185 [2024-11-06 14:27:27.742208] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:00.185 14:27:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:23:00.444 14:27:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:23:00.444 14:27:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:23:00.444 14:27:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:23:00.444 14:27:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:23:01.820 Initializing NVMe Controllers 00:23:01.820 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:23:01.820 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:23:01.820 Initialization complete. Launching workers. 00:23:01.820 ======================================================== 00:23:01.820 Latency(us) 00:23:01.821 Device Information : IOPS MiB/s Average min max 00:23:01.821 PCIE (0000:00:10.0) NSID 1 from core 0: 16896.05 66.00 1893.24 666.11 8556.42 00:23:01.821 ======================================================== 00:23:01.821 Total : 16896.05 66.00 1893.24 666.11 8556.42 00:23:01.821 00:23:01.821 14:27:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:23:03.199 Initializing NVMe Controllers 00:23:03.199 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:23:03.199 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:03.199 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:03.199 Initialization complete. Launching workers. 
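Condensed, the RPC sequence traced above that exposes both bdevs over NVMe/TCP is the following (NQN, serial number, address and the transport option string are exactly the values shown in the trace):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1

  $rpc nvmf_create_transport -t tcp -o                         # TCP transport, options as traced
  $rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001   # -a: allow any host to connect

  for bdev in Malloc0 Nvme0n1; do                              # both bdevs become namespaces
      $rpc nvmf_subsystem_add_ns "$NQN" "$bdev"
  done

  # One listener for the subsystem, one for the discovery service, both on the
  # target-side address (10.0.0.3) that the initiator reaches over the bridge.
  $rpc nvmf_subsystem_add_listener "$NQN"    -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420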
00:23:03.199 ======================================================== 00:23:03.199 Latency(us) 00:23:03.199 Device Information : IOPS MiB/s Average min max 00:23:03.199 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2464.96 9.63 405.42 135.64 4286.41 00:23:03.199 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 125.00 0.49 8059.99 6044.93 12058.35 00:23:03.199 ======================================================== 00:23:03.199 Total : 2589.95 10.12 774.85 135.64 12058.35 00:23:03.199 00:23:03.199 14:27:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:23:04.575 Initializing NVMe Controllers 00:23:04.575 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:23:04.575 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:04.575 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:04.575 Initialization complete. Launching workers. 00:23:04.575 ======================================================== 00:23:04.575 Latency(us) 00:23:04.575 Device Information : IOPS MiB/s Average min max 00:23:04.575 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8051.98 31.45 3974.75 539.83 7891.82 00:23:04.575 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3958.99 15.46 8123.89 6523.63 15040.89 00:23:04.575 ======================================================== 00:23:04.575 Total : 12010.96 46.92 5342.37 539.83 15040.89 00:23:04.575 00:23:04.575 14:27:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:23:04.575 14:27:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:23:07.893 Initializing NVMe Controllers 00:23:07.893 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:23:07.893 Controller IO queue size 128, less than required. 00:23:07.893 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:07.893 Controller IO queue size 128, less than required. 00:23:07.893 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:07.893 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:07.893 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:07.893 Initialization complete. Launching workers. 
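The MiB/s column in these tables is simply IOPS scaled by the I/O size. A quick check against the q=32, 4096-byte total just above (purely illustrative):

  # MiB/s = IOPS * io_size / 2^20; total row of the q=32 run: 12010.96 IOPS at 4096 B
  awk 'BEGIN { printf "%.2f MiB/s\n", 12010.96 * 4096 / (1024 * 1024) }'   # -> 46.92, matching the table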
00:23:07.893 ======================================================== 00:23:07.893 Latency(us) 00:23:07.893 Device Information : IOPS MiB/s Average min max 00:23:07.893 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1532.06 383.02 85998.28 41858.72 260094.53 00:23:07.893 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 610.23 152.56 233671.88 100054.92 566639.13 00:23:07.893 ======================================================== 00:23:07.893 Total : 2142.29 535.57 128062.88 41858.72 566639.13 00:23:07.893 00:23:07.893 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:23:08.153 Initializing NVMe Controllers 00:23:08.153 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:23:08.153 Controller IO queue size 128, less than required. 00:23:08.153 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:08.153 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:23:08.153 Controller IO queue size 128, less than required. 00:23:08.153 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:08.153 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:23:08.153 WARNING: Some requested NVMe devices were skipped 00:23:08.153 No valid NVMe controllers or AIO or URING devices found 00:23:08.153 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:23:11.445 Initializing NVMe Controllers 00:23:11.445 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:23:11.445 Controller IO queue size 128, less than required. 00:23:11.445 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:11.445 Controller IO queue size 128, less than required. 00:23:11.445 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:11.445 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:11.445 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:11.445 Initialization complete. Launching workers. 
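The -o 36964 run above ends with no usable namespaces because perf only queues I/O whose size is a whole multiple of the namespace block size, and 36964 is a multiple of neither 512 nor 4096. The check it reports amounts to:

  io=36964
  for bs in 512 4096; do            # block sizes of nsid 1 and nsid 2 above
      if (( io % bs != 0 )); then
          echo "IO size $io is not a multiple of sector size $bs - namespace skipped"
      fi
  done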
00:23:11.445 00:23:11.445 ==================== 00:23:11.445 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:23:11.445 TCP transport: 00:23:11.445 polls: 5730 00:23:11.445 idle_polls: 3339 00:23:11.445 sock_completions: 2391 00:23:11.445 nvme_completions: 4483 00:23:11.445 submitted_requests: 6688 00:23:11.445 queued_requests: 1 00:23:11.445 00:23:11.445 ==================== 00:23:11.445 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:23:11.445 TCP transport: 00:23:11.445 polls: 6768 00:23:11.445 idle_polls: 3510 00:23:11.445 sock_completions: 3258 00:23:11.445 nvme_completions: 5027 00:23:11.445 submitted_requests: 7542 00:23:11.445 queued_requests: 1 00:23:11.445 ======================================================== 00:23:11.445 Latency(us) 00:23:11.445 Device Information : IOPS MiB/s Average min max 00:23:11.445 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1119.75 279.94 118010.42 53260.46 333655.65 00:23:11.445 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1255.66 313.92 106988.38 62607.26 472729.04 00:23:11.445 ======================================================== 00:23:11.445 Total : 2375.42 593.85 112184.09 53260.46 472729.04 00:23:11.445 00:23:11.445 14:27:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:23:11.445 14:27:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:11.445 14:27:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:23:11.445 14:27:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']' 00:23:11.445 14:27:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:23:11.704 14:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=4e4d4d54-49b7-4e72-8d3a-d80b0f79793c 00:23:11.704 14:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 4e4d4d54-49b7-4e72-8d3a-d80b0f79793c 00:23:11.704 14:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local lvs_uuid=4e4d4d54-49b7-4e72-8d3a-d80b0f79793c 00:23:11.704 14:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local lvs_info 00:23:11.704 14:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local fc 00:23:11.704 14:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local cs 00:23:11.704 14:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:11.963 14:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # lvs_info='[ 00:23:11.963 { 00:23:11.963 "uuid": "4e4d4d54-49b7-4e72-8d3a-d80b0f79793c", 00:23:11.963 "name": "lvs_0", 00:23:11.963 "base_bdev": "Nvme0n1", 00:23:11.963 "total_data_clusters": 1278, 00:23:11.963 "free_clusters": 1278, 00:23:11.963 "block_size": 4096, 00:23:11.963 "cluster_size": 4194304 00:23:11.963 } 00:23:11.963 ]' 00:23:11.963 14:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # jq '.[] | select(.uuid=="4e4d4d54-49b7-4e72-8d3a-d80b0f79793c") .free_clusters' 00:23:11.963 14:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # fc=1278 00:23:11.963 14:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # jq '.[] | 
select(.uuid=="4e4d4d54-49b7-4e72-8d3a-d80b0f79793c") .cluster_size' 00:23:11.963 14:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # cs=4194304 00:23:11.963 5112 00:23:11.963 14:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1375 -- # free_mb=5112 00:23:11.963 14:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1376 -- # echo 5112 00:23:11.963 14:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:23:11.963 14:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4e4d4d54-49b7-4e72-8d3a-d80b0f79793c lbd_0 5112 00:23:12.221 14:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=7b3d1247-394d-45ce-af29-8c1930ae53ce 00:23:12.221 14:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 7b3d1247-394d-45ce-af29-8c1930ae53ce lvs_n_0 00:23:12.479 14:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=8ab7b5a5-0126-43ea-abcf-ab958b388eca 00:23:12.479 14:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 8ab7b5a5-0126-43ea-abcf-ab958b388eca 00:23:12.479 14:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local lvs_uuid=8ab7b5a5-0126-43ea-abcf-ab958b388eca 00:23:12.479 14:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local lvs_info 00:23:12.479 14:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local fc 00:23:12.479 14:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local cs 00:23:12.479 14:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:12.739 14:27:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # lvs_info='[ 00:23:12.739 { 00:23:12.739 "uuid": "4e4d4d54-49b7-4e72-8d3a-d80b0f79793c", 00:23:12.739 "name": "lvs_0", 00:23:12.739 "base_bdev": "Nvme0n1", 00:23:12.739 "total_data_clusters": 1278, 00:23:12.739 "free_clusters": 0, 00:23:12.739 "block_size": 4096, 00:23:12.739 "cluster_size": 4194304 00:23:12.739 }, 00:23:12.739 { 00:23:12.739 "uuid": "8ab7b5a5-0126-43ea-abcf-ab958b388eca", 00:23:12.739 "name": "lvs_n_0", 00:23:12.739 "base_bdev": "7b3d1247-394d-45ce-af29-8c1930ae53ce", 00:23:12.739 "total_data_clusters": 1276, 00:23:12.739 "free_clusters": 1276, 00:23:12.739 "block_size": 4096, 00:23:12.739 "cluster_size": 4194304 00:23:12.739 } 00:23:12.739 ]' 00:23:12.739 14:27:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # jq '.[] | select(.uuid=="8ab7b5a5-0126-43ea-abcf-ab958b388eca") .free_clusters' 00:23:12.739 14:27:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # fc=1276 00:23:12.739 14:27:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # jq '.[] | select(.uuid=="8ab7b5a5-0126-43ea-abcf-ab958b388eca") .cluster_size' 00:23:12.739 14:27:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # cs=4194304 00:23:12.739 14:27:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1375 -- # free_mb=5104 00:23:12.739 5104 00:23:12.739 14:27:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1376 -- # echo 5104 00:23:12.739 14:27:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:23:12.739 14:27:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8ab7b5a5-0126-43ea-abcf-ab958b388eca lbd_nest_0 5104 00:23:12.998 14:27:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=8a8fa1e5-4bdf-43b2-aeed-4868027a78d4 00:23:12.998 14:27:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:13.257 14:27:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:23:13.257 14:27:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 8a8fa1e5-4bdf-43b2-aeed-4868027a78d4 00:23:13.516 14:27:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:13.516 14:27:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:23:13.516 14:27:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:23:13.516 14:27:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:23:13.516 14:27:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:13.516 14:27:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:23:14.084 Initializing NVMe Controllers 00:23:14.084 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:23:14.084 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:23:14.084 WARNING: Some requested NVMe devices were skipped 00:23:14.084 No valid NVMe controllers or AIO or URING devices found 00:23:14.084 14:27:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:14.084 14:27:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:23:26.296 Initializing NVMe Controllers 00:23:26.296 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:23:26.296 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:26.296 Initialization complete. Launching workers. 
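For the queue-depth sweep, the trace above first stacked two logical-volume layers on the local NVMe drive, each sized from its store's free_clusters x cluster_size: 1278 x 4 MiB = 5112 MiB for lbd_0, then 1276 x 4 MiB = 5104 MiB for lbd_nest_0 (the nested store loses a couple of clusters to its own metadata). Condensed, with the free-MB values hard-coded from the trace rather than recomputed:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  ls_guid=$($rpc bdev_lvol_create_lvstore Nvme0n1 lvs_0)               # store directly on the NVMe bdev
  lb_guid=$($rpc bdev_lvol_create -u "$ls_guid" lbd_0 5112)            # 1278 clusters * 4 MiB

  ls_nested_guid=$($rpc bdev_lvol_create_lvstore "$lb_guid" lvs_n_0)   # second store nested on that lvol
  lb_nested_guid=$($rpc bdev_lvol_create -u "$ls_nested_guid" lbd_nest_0 5104)   # 1276 clusters * 4 MiB

lbd_nest_0 is then the only namespace behind nqn.2016-06.io.spdk:cnode1 for the runs that follow.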
00:23:26.296 ======================================================== 00:23:26.296 Latency(us) 00:23:26.296 Device Information : IOPS MiB/s Average min max 00:23:26.296 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 808.70 101.09 1235.64 373.26 7525.68 00:23:26.296 ======================================================== 00:23:26.296 Total : 808.70 101.09 1235.64 373.26 7525.68 00:23:26.296 00:23:26.296 14:27:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:23:26.296 14:27:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:26.296 14:27:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:23:26.296 Initializing NVMe Controllers 00:23:26.296 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:23:26.296 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:23:26.296 WARNING: Some requested NVMe devices were skipped 00:23:26.296 No valid NVMe controllers or AIO or URING devices found 00:23:26.296 14:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:26.296 14:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:23:36.307 Initializing NVMe Controllers 00:23:36.307 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:23:36.307 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:36.307 Initialization complete. Launching workers. 
00:23:36.307 ======================================================== 00:23:36.307 Latency(us) 00:23:36.307 Device Information : IOPS MiB/s Average min max 00:23:36.307 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1338.00 167.25 23941.37 5116.90 59961.24 00:23:36.307 ======================================================== 00:23:36.307 Total : 1338.00 167.25 23941.37 5116.90 59961.24 00:23:36.307 00:23:36.307 14:28:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:23:36.307 14:28:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:36.307 14:28:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:23:36.307 Initializing NVMe Controllers 00:23:36.307 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:23:36.307 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:23:36.307 WARNING: Some requested NVMe devices were skipped 00:23:36.307 No valid NVMe controllers or AIO or URING devices found 00:23:36.307 14:28:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:36.307 14:28:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:23:46.292 Initializing NVMe Controllers 00:23:46.292 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:23:46.292 Controller IO queue size 128, less than required. 00:23:46.292 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:46.292 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:46.292 Initialization complete. Launching workers. 
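The runs above and the 128-deep, 128 KiB run whose results follow all come from one nested loop over the qd_depth and io_size arrays set earlier; the 512-byte cases are skipped because the lvol namespace uses a 4096-byte block size (its 5351931904-byte size is exactly the 5104 MiB volume created above). In outline:

  qd_depth=(1 32 128)
  io_size=(512 131072)

  for qd in "${qd_depth[@]}"; do
      for o in "${io_size[@]}"; do
          /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
              -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
              -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
      done
  done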
00:23:46.292 ======================================================== 00:23:46.292 Latency(us) 00:23:46.292 Device Information : IOPS MiB/s Average min max 00:23:46.292 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3349.40 418.67 38279.64 7787.64 112816.66 00:23:46.292 ======================================================== 00:23:46.292 Total : 3349.40 418.67 38279.64 7787.64 112816.66 00:23:46.292 00:23:46.292 14:28:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:46.292 14:28:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 8a8fa1e5-4bdf-43b2-aeed-4868027a78d4 00:23:46.861 14:28:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:23:47.120 14:28:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 7b3d1247-394d-45ce-af29-8c1930ae53ce 00:23:47.379 14:28:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:23:47.379 14:28:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:23:47.379 14:28:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:23:47.379 14:28:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:47.379 14:28:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:23:47.654 14:28:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:47.654 14:28:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:23:47.654 14:28:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:47.654 14:28:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:47.654 rmmod nvme_tcp 00:23:47.654 rmmod nvme_fabrics 00:23:47.654 rmmod nvme_keyring 00:23:47.654 14:28:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:47.654 14:28:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:23:47.654 14:28:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:23:47.654 14:28:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 80550 ']' 00:23:47.654 14:28:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 80550 00:23:47.654 14:28:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # '[' -z 80550 ']' 00:23:47.654 14:28:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # kill -0 80550 00:23:47.654 14:28:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # uname 00:23:47.654 14:28:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:47.654 14:28:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80550 00:23:47.654 killing process with pid 80550 00:23:47.654 14:28:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:47.654 14:28:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:47.654 14:28:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80550' 00:23:47.654 14:28:15 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@971 -- # kill 80550 00:23:47.654 14:28:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@976 -- # wait 80550 00:23:50.191 14:28:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:50.191 14:28:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:50.191 14:28:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:50.191 14:28:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:23:50.191 14:28:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:23:50.191 14:28:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:23:50.191 14:28:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:50.191 14:28:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:50.191 14:28:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:50.191 14:28:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:50.191 14:28:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:50.191 14:28:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:50.191 14:28:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:50.191 14:28:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:50.191 14:28:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:50.191 14:28:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:50.191 14:28:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:50.191 14:28:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:50.191 14:28:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:50.191 14:28:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:50.191 14:28:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:50.191 14:28:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:50.191 14:28:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:50.191 14:28:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:50.191 14:28:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:50.191 14:28:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.191 14:28:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:23:50.191 00:23:50.191 real 0m53.870s 00:23:50.191 user 3m19.411s 00:23:50.191 sys 0m14.132s 00:23:50.191 14:28:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:50.191 14:28:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:50.191 ************************************ 00:23:50.191 END TEST nvmf_perf 00:23:50.191 ************************************ 00:23:50.191 14:28:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:50.191 14:28:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:50.191 14:28:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:50.191 14:28:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.191 ************************************ 00:23:50.191 START TEST nvmf_fio_host 00:23:50.191 ************************************ 00:23:50.191 14:28:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:50.451 * Looking for test storage... 00:23:50.451 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:50.451 14:28:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:50.451 14:28:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:23:50.451 14:28:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:50.451 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:50.451 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:50.451 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:50.451 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:50.451 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:50.451 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:50.451 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:50.451 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:50.451 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:50.451 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:50.451 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:50.451 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:50.451 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:23:50.451 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:23:50.451 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:50.451 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:50.451 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:23:50.451 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:23:50.451 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:50.451 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:23:50.451 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:50.451 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:23:50.451 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:23:50.451 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:50.451 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:23:50.451 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:50.451 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:50.451 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:50.451 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:23:50.451 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:50.451 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:50.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:50.451 --rc genhtml_branch_coverage=1 00:23:50.451 --rc genhtml_function_coverage=1 00:23:50.451 --rc genhtml_legend=1 00:23:50.451 --rc geninfo_all_blocks=1 00:23:50.451 --rc geninfo_unexecuted_blocks=1 00:23:50.451 00:23:50.451 ' 00:23:50.451 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:50.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:50.451 --rc genhtml_branch_coverage=1 00:23:50.451 --rc genhtml_function_coverage=1 00:23:50.451 --rc genhtml_legend=1 00:23:50.451 --rc geninfo_all_blocks=1 00:23:50.451 --rc geninfo_unexecuted_blocks=1 00:23:50.451 00:23:50.451 ' 00:23:50.451 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:50.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:50.451 --rc genhtml_branch_coverage=1 00:23:50.451 --rc genhtml_function_coverage=1 00:23:50.451 --rc genhtml_legend=1 00:23:50.451 --rc geninfo_all_blocks=1 00:23:50.451 --rc geninfo_unexecuted_blocks=1 00:23:50.451 00:23:50.451 ' 00:23:50.451 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:50.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:50.451 --rc genhtml_branch_coverage=1 00:23:50.451 --rc genhtml_function_coverage=1 00:23:50.451 --rc genhtml_legend=1 00:23:50.451 --rc geninfo_all_blocks=1 00:23:50.451 --rc geninfo_unexecuted_blocks=1 00:23:50.451 00:23:50.451 ' 00:23:50.451 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:50.451 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:50.451 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:50.451 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:50.451 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:50.451 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.452 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.452 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.452 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:50.452 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.452 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:50.452 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:23:50.452 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:50.452 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:50.452 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:50.452 14:28:18 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:50.452 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:50.452 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:50.452 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:50.452 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:50.452 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:50.452 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:50.452 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:23:50.452 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:23:50.452 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:50.452 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:50.452 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:50.452 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:50.452 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:50.452 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:50.714 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:50.714 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:50.714 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:50.714 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.714 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.714 14:28:18 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.714 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:50.714 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.714 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:23:50.714 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:50.714 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:50.714 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:50.714 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:50.714 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:50.714 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:50.714 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:50.714 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:50.714 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:50.714 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:50.714 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:50.714 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:23:50.714 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:50.715 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:50.715 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:50.715 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:50.715 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:50.715 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
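What follows is nvmf_veth_init rebuilding, for the fio test, the same topology the perf test used: a nvmf_tgt_ns_spdk namespace holding the target ends of two veth pairs (10.0.0.3 and 10.0.0.4), the initiator ends (10.0.0.1, 10.0.0.2) left in the root namespace, an nvmf_br bridge joining them, and tagged iptables rules opening TCP port 4420. A condensed sketch of the core steps, leaving out the error-tolerant cleanup of stale devices and showing only the first interface pair (nvmf_init_if2/nvmf_tgt_if2 follow the same pattern):

  NS=nvmf_tgt_ns_spdk
  ip netns add "$NS"

  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator <-> bridge
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target    <-> bridge
  ip link set nvmf_tgt_if netns "$NS"

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec "$NS" ip link set nvmf_tgt_if up
  ip netns exec "$NS" ip link set lo up

  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  # Open the NVMe/TCP port; the comment lets teardown strip exactly these rules later.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

  ping -c 1 10.0.0.3                         # root namespace -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1     # target namespace -> initiator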
00:23:50.715 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:50.715 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.715 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:50.715 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:50.715 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:50.715 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:50.715 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:50.715 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:50.715 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:50.715 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:50.715 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:50.715 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:50.715 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:50.715 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:50.715 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:50.715 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:50.715 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:50.715 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:50.715 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:50.715 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:50.715 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:50.715 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:50.715 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:50.715 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:50.715 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:50.715 Cannot find device "nvmf_init_br" 00:23:50.715 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:23:50.715 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:50.715 Cannot find device "nvmf_init_br2" 00:23:50.715 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:23:50.715 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:50.715 Cannot find device "nvmf_tgt_br" 00:23:50.715 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:23:50.715 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:23:50.715 Cannot find device "nvmf_tgt_br2" 00:23:50.715 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:23:50.715 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:50.715 Cannot find device "nvmf_init_br" 00:23:50.715 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:23:50.715 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:50.715 Cannot find device "nvmf_init_br2" 00:23:50.715 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:23:50.715 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:50.715 Cannot find device "nvmf_tgt_br" 00:23:50.715 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:23:50.715 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:50.715 Cannot find device "nvmf_tgt_br2" 00:23:50.715 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:23:50.715 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:50.715 Cannot find device "nvmf_br" 00:23:50.715 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:23:50.715 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:50.715 Cannot find device "nvmf_init_if" 00:23:50.715 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:23:50.715 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:50.715 Cannot find device "nvmf_init_if2" 00:23:50.715 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:23:50.715 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:50.715 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:50.716 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:23:50.716 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:50.716 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:50.716 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:23:50.716 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:50.716 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:50.716 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:50.716 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:50.976 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:50.976 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:50.976 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:50.976 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:23:50.976 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:50.976 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:50.976 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:50.976 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:50.976 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:50.976 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:50.976 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:50.976 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:50.976 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:50.976 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:50.976 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:50.976 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:50.976 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:50.976 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:50.976 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:50.976 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:50.976 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:50.976 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:50.976 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:50.976 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:50.976 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:50.976 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:50.976 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:50.976 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:50.976 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:51.236 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:23:51.236 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.116 ms 00:23:51.236 00:23:51.236 --- 10.0.0.3 ping statistics --- 00:23:51.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.236 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:23:51.236 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:51.236 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:51.236 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.069 ms 00:23:51.236 00:23:51.236 --- 10.0.0.4 ping statistics --- 00:23:51.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.236 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:23:51.236 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:51.236 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:51.236 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:23:51.236 00:23:51.236 --- 10.0.0.1 ping statistics --- 00:23:51.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.236 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:23:51.236 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:51.236 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:51.236 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:23:51.236 00:23:51.236 --- 10.0.0.2 ping statistics --- 00:23:51.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.236 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:23:51.236 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:51.236 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:23:51.236 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:51.236 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:51.236 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:51.236 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:51.236 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:51.236 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:51.236 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:51.236 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:23:51.236 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:23:51.236 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:51.236 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.236 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=81456 00:23:51.236 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:51.236 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:51.236 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 81456 00:23:51.236 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@833 -- # '[' -z 81456 ']' 00:23:51.236 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:51.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:51.236 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:51.236 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:51.236 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:51.236 14:28:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.236 [2024-11-06 14:28:18.794753] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:23:51.236 [2024-11-06 14:28:18.794883] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:51.495 [2024-11-06 14:28:18.982883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:51.754 [2024-11-06 14:28:19.132199] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:51.754 [2024-11-06 14:28:19.132261] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:51.754 [2024-11-06 14:28:19.132277] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:51.754 [2024-11-06 14:28:19.132289] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:51.754 [2024-11-06 14:28:19.132301] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
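At this point nvmf_veth_init has finished and the target is coming up inside the namespace. For orientation, the topology the trace above builds is: two initiator-side veth pairs kept in the root namespace (nvmf_init_if/nvmf_init_if2 at 10.0.0.1 and 10.0.0.2), two target-side pairs whose far ends are moved into nvmf_tgt_ns_spdk (nvmf_tgt_if/nvmf_tgt_if2 at 10.0.0.3 and 10.0.0.4), and a bridge nvmf_br joining the four *_br peer ends. A condensed sketch using only names and addresses taken from the trace; bringing the remaining links (the _if ends and lo inside the namespace) up and the iptables ACCEPT rules are left out here:

  # topology sketch (names/addresses as in the nvmf_veth_init trace above)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk           # target ends live in the namespace
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                  # initiator side
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
      ip link set "$dev" master nvmf_br                     # bridge the two sides together
  done

The four pings earlier in the trace (10.0.0.3 and 10.0.0.4 from the root namespace, 10.0.0.1 and 10.0.0.2 from inside it) are the connectivity check across this bridge before nvmf_tgt is launched in the namespace. The ipts wrapper tags each ACCEPT rule it inserts with an 'SPDK_NVMF:...' comment, which is how the teardown further down can drop exactly those rules by piping iptables-save through grep -v SPDK_NVMF into iptables-restore.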
00:23:51.754 [2024-11-06 14:28:19.134657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:51.754 [2024-11-06 14:28:19.134809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:51.754 [2024-11-06 14:28:19.134922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:51.754 [2024-11-06 14:28:19.135392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:51.754 [2024-11-06 14:28:19.385613] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:52.013 14:28:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:52.013 14:28:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@866 -- # return 0 00:23:52.013 14:28:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:52.272 [2024-11-06 14:28:19.814424] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:52.272 14:28:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:23:52.272 14:28:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:52.272 14:28:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.531 14:28:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:23:52.790 Malloc1 00:23:52.790 14:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:53.049 14:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:53.049 14:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:53.308 [2024-11-06 14:28:20.841307] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:53.308 14:28:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:23:53.567 14:28:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:23:53.567 14:28:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:23:53.567 14:28:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:23:53.567 14:28:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:23:53.567 14:28:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:53.567 14:28:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:23:53.567 14:28:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:53.567 14:28:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:23:53.567 14:28:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:23:53.567 14:28:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:23:53.567 14:28:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:23:53.567 14:28:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:53.567 14:28:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:23:53.567 14:28:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:53.567 14:28:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:53.567 14:28:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # break 00:23:53.567 14:28:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:23:53.567 14:28:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:23:53.826 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:53.826 fio-3.35 00:23:53.826 Starting 1 thread 00:23:56.362 00:23:56.362 test: (groupid=0, jobs=1): err= 0: pid=81531: Wed Nov 6 14:28:23 2024 00:23:56.362 read: IOPS=8431, BW=32.9MiB/s (34.5MB/s)(66.1MiB/2008msec) 00:23:56.362 slat (nsec): min=1734, max=511659, avg=2033.97, stdev=4810.96 00:23:56.362 clat (usec): min=3720, max=15410, avg=7925.93, stdev=626.18 00:23:56.362 lat (usec): min=3770, max=15412, avg=7927.96, stdev=625.96 00:23:56.362 clat percentiles (usec): 00:23:56.362 | 1.00th=[ 6587], 5.00th=[ 7046], 10.00th=[ 7242], 20.00th=[ 7439], 00:23:56.362 | 30.00th=[ 7635], 40.00th=[ 7767], 50.00th=[ 7898], 60.00th=[ 8029], 00:23:56.362 | 70.00th=[ 8160], 80.00th=[ 8356], 90.00th=[ 8586], 95.00th=[ 8848], 00:23:56.362 | 99.00th=[ 9503], 99.50th=[ 9896], 99.90th=[13566], 99.95th=[14746], 00:23:56.362 | 99.99th=[15008] 00:23:56.363 bw ( KiB/s): min=33296, max=34048, per=99.95%, avg=33707.00, stdev=320.66, samples=4 00:23:56.363 iops : min= 8324, max= 8512, avg=8426.75, stdev=80.16, samples=4 00:23:56.363 write: IOPS=8424, BW=32.9MiB/s (34.5MB/s)(66.1MiB/2008msec); 0 zone resets 00:23:56.363 slat (nsec): min=1799, max=471766, avg=2115.31, stdev=4650.08 00:23:56.363 clat (usec): min=3490, max=14735, avg=7189.37, stdev=576.48 00:23:56.363 lat (usec): min=3511, max=14737, avg=7191.49, stdev=576.46 00:23:56.363 clat percentiles (usec): 00:23:56.363 | 1.00th=[ 5997], 5.00th=[ 6390], 10.00th=[ 6587], 20.00th=[ 6783], 00:23:56.363 | 30.00th=[ 6915], 40.00th=[ 7046], 50.00th=[ 7177], 60.00th=[ 7308], 00:23:56.363 | 70.00th=[ 7373], 80.00th=[ 7570], 90.00th=[ 7832], 95.00th=[ 8029], 00:23:56.363 | 99.00th=[ 8586], 99.50th=[ 8979], 99.90th=[12387], 99.95th=[13698], 00:23:56.363 | 99.99th=[14746] 00:23:56.363 bw ( KiB/s): min=33149, max=34568, per=100.00%, avg=33697.25, stdev=662.22, samples=4 00:23:56.363 iops : min= 8287, max= 8642, avg=8424.25, 
stdev=165.62, samples=4 00:23:56.363 lat (msec) : 4=0.02%, 10=99.65%, 20=0.33% 00:23:56.363 cpu : usr=71.70%, sys=22.82%, ctx=23, majf=0, minf=1556 00:23:56.363 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:23:56.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.363 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:56.363 issued rwts: total=16930,16916,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.363 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:56.363 00:23:56.363 Run status group 0 (all jobs): 00:23:56.363 READ: bw=32.9MiB/s (34.5MB/s), 32.9MiB/s-32.9MiB/s (34.5MB/s-34.5MB/s), io=66.1MiB (69.3MB), run=2008-2008msec 00:23:56.363 WRITE: bw=32.9MiB/s (34.5MB/s), 32.9MiB/s-32.9MiB/s (34.5MB/s-34.5MB/s), io=66.1MiB (69.3MB), run=2008-2008msec 00:23:56.363 ----------------------------------------------------- 00:23:56.363 Suppressions used: 00:23:56.363 count bytes template 00:23:56.363 1 57 /usr/src/fio/parse.c 00:23:56.363 1 8 libtcmalloc_minimal.so 00:23:56.363 ----------------------------------------------------- 00:23:56.363 00:23:56.363 14:28:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:23:56.363 14:28:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:23:56.363 14:28:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:23:56.363 14:28:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:56.363 14:28:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:23:56.363 14:28:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:56.363 14:28:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:23:56.363 14:28:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:23:56.363 14:28:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:23:56.363 14:28:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:56.363 14:28:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:23:56.363 14:28:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:23:56.363 14:28:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:56.363 14:28:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:56.363 14:28:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # break 00:23:56.363 14:28:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:23:56.363 14:28:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:23:56.633 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:23:56.633 fio-3.35 00:23:56.633 Starting 1 thread 00:23:59.167 00:23:59.167 test: (groupid=0, jobs=1): err= 0: pid=81568: Wed Nov 6 14:28:26 2024 00:23:59.167 read: IOPS=7368, BW=115MiB/s (121MB/s)(231MiB/2006msec) 00:23:59.167 slat (usec): min=2, max=121, avg= 3.29, stdev= 1.86 00:23:59.167 clat (usec): min=1081, max=32423, avg=10106.68, stdev=2820.46 00:23:59.167 lat (usec): min=1084, max=32426, avg=10109.98, stdev=2820.52 00:23:59.167 clat percentiles (usec): 00:23:59.167 | 1.00th=[ 4621], 5.00th=[ 5669], 10.00th=[ 6587], 20.00th=[ 7767], 00:23:59.167 | 30.00th=[ 8455], 40.00th=[ 9241], 50.00th=[10159], 60.00th=[10683], 00:23:59.167 | 70.00th=[11469], 80.00th=[12387], 90.00th=[13435], 95.00th=[14877], 00:23:59.167 | 99.00th=[17957], 99.50th=[19792], 99.90th=[22414], 99.95th=[22676], 00:23:59.167 | 99.99th=[27657] 00:23:59.167 bw ( KiB/s): min=57920, max=59616, per=49.83%, avg=58752.00, stdev=692.76, samples=4 00:23:59.167 iops : min= 3620, max= 3726, avg=3672.00, stdev=43.30, samples=4 00:23:59.167 write: IOPS=4203, BW=65.7MiB/s (68.9MB/s)(121MiB/1838msec); 0 zone resets 00:23:59.167 slat (usec): min=29, max=158, avg=32.56, stdev= 8.07 00:23:59.167 clat (usec): min=6548, max=37352, avg=13037.43, stdev=3571.33 00:23:59.167 lat (usec): min=6580, max=37387, avg=13069.98, stdev=3572.29 00:23:59.167 clat percentiles (usec): 00:23:59.167 | 1.00th=[ 7898], 5.00th=[ 8848], 10.00th=[ 9503], 20.00th=[10290], 00:23:59.167 | 30.00th=[11076], 40.00th=[11863], 50.00th=[12387], 60.00th=[13304], 00:23:59.167 | 70.00th=[14091], 80.00th=[15139], 90.00th=[16712], 95.00th=[18482], 00:23:59.167 | 99.00th=[28181], 99.50th=[31851], 99.90th=[35914], 99.95th=[36963], 00:23:59.167 | 99.99th=[37487] 00:23:59.167 bw ( KiB/s): min=60736, max=62112, per=91.15%, avg=61304.00, stdev=615.19, samples=4 00:23:59.167 iops : min= 3796, max= 3882, avg=3831.50, stdev=38.45, samples=4 00:23:59.167 lat (msec) : 2=0.01%, 4=0.14%, 10=37.00%, 20=61.28%, 50=1.58% 00:23:59.167 cpu : usr=80.36%, sys=16.25%, ctx=5, majf=0, minf=2217 00:23:59.167 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:23:59.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.167 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:59.167 issued rwts: total=14782,7726,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:59.167 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:59.167 00:23:59.167 Run status group 0 (all jobs): 00:23:59.167 READ: bw=115MiB/s (121MB/s), 115MiB/s-115MiB/s (121MB/s-121MB/s), io=231MiB (242MB), run=2006-2006msec 00:23:59.167 WRITE: bw=65.7MiB/s (68.9MB/s), 65.7MiB/s-65.7MiB/s (68.9MB/s-68.9MB/s), io=121MiB (127MB), run=1838-1838msec 00:23:59.167 ----------------------------------------------------- 00:23:59.167 Suppressions used: 00:23:59.167 count bytes template 00:23:59.167 1 57 /usr/src/fio/parse.c 00:23:59.167 345 33120 /usr/src/fio/iolog.c 00:23:59.167 1 8 libtcmalloc_minimal.so 00:23:59.167 ----------------------------------------------------- 00:23:59.167 00:23:59.167 14:28:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:59.425 14:28:26 nvmf_tcp.nvmf_host.nvmf_fio_host 
-- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:23:59.425 14:28:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:23:59.425 14:28:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:23:59.425 14:28:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # bdfs=() 00:23:59.425 14:28:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # local bdfs 00:23:59.425 14:28:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:23:59.425 14:28:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:23:59.426 14:28:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:23:59.426 14:28:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:23:59.426 14:28:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:23:59.426 14:28:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.3 00:23:59.992 Nvme0n1 00:23:59.992 14:28:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:23:59.992 14:28:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=2319d0ea-9417-4bf7-ab56-8dbd1c6cffe8 00:23:59.992 14:28:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 2319d0ea-9417-4bf7-ab56-8dbd1c6cffe8 00:23:59.992 14:28:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local lvs_uuid=2319d0ea-9417-4bf7-ab56-8dbd1c6cffe8 00:23:59.992 14:28:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local lvs_info 00:23:59.992 14:28:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local fc 00:23:59.992 14:28:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local cs 00:23:59.992 14:28:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:00.250 14:28:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # lvs_info='[ 00:24:00.250 { 00:24:00.250 "uuid": "2319d0ea-9417-4bf7-ab56-8dbd1c6cffe8", 00:24:00.250 "name": "lvs_0", 00:24:00.250 "base_bdev": "Nvme0n1", 00:24:00.250 "total_data_clusters": 4, 00:24:00.250 "free_clusters": 4, 00:24:00.250 "block_size": 4096, 00:24:00.250 "cluster_size": 1073741824 00:24:00.250 } 00:24:00.250 ]' 00:24:00.250 14:28:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # jq '.[] | select(.uuid=="2319d0ea-9417-4bf7-ab56-8dbd1c6cffe8") .free_clusters' 00:24:00.250 14:28:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # fc=4 00:24:00.250 14:28:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # jq '.[] | select(.uuid=="2319d0ea-9417-4bf7-ab56-8dbd1c6cffe8") .cluster_size' 00:24:00.250 14:28:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # cs=1073741824 00:24:00.250 4096 00:24:00.250 14:28:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1375 -- # free_mb=4096 00:24:00.250 14:28:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1376 -- # echo 4096 00:24:00.250 14:28:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:24:00.508 6895979f-1065-45fc-a4a1-25f1521786e6 00:24:00.508 14:28:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:24:00.767 14:28:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:24:01.025 14:28:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:24:01.283 14:28:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:24:01.283 14:28:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:24:01.283 14:28:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:24:01.283 14:28:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:01.283 14:28:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:24:01.283 14:28:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:01.283 14:28:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:24:01.283 14:28:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:24:01.283 14:28:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:24:01.283 14:28:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:24:01.283 14:28:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:01.283 14:28:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:24:01.283 14:28:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:24:01.283 14:28:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:24:01.283 14:28:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # break 00:24:01.283 14:28:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:24:01.283 14:28:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:24:01.283 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:01.283 fio-3.35 00:24:01.283 Starting 1 thread 
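Each fio invocation in this test goes through the fio_plugin helper from common/autotest_common.sh seen in the trace: it runs ldd on the SPDK external ioengine and, when the binary links a sanitizer runtime, preloads that library ahead of the plugin so fio picks it up first. A condensed sketch of the pattern (libasan case only; paths and arguments copied from the trace above):

  # run fio with the SPDK NVMe external ioengine, preloading ASan if the plugin links it
  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
  LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" /usr/src/fio/fio \
      /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096

The quoted --filename string is how the SPDK fio plugin encodes the NVMe-oF connection parameters; here it matches the listener added on 10.0.0.3:4420 a few steps earlier.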
00:24:03.813 00:24:03.814 test: (groupid=0, jobs=1): err= 0: pid=81671: Wed Nov 6 14:28:31 2024 00:24:03.814 read: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(44.0MiB/2009msec) 00:24:03.814 slat (nsec): min=1749, max=408360, avg=2269.22, stdev=5146.69 00:24:03.814 clat (usec): min=3696, max=21427, avg=11992.07, stdev=1016.78 00:24:03.814 lat (usec): min=3707, max=21429, avg=11994.34, stdev=1016.32 00:24:03.814 clat percentiles (usec): 00:24:03.814 | 1.00th=[ 9765], 5.00th=[10552], 10.00th=[10814], 20.00th=[11207], 00:24:03.814 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11994], 60.00th=[12256], 00:24:03.814 | 70.00th=[12518], 80.00th=[12780], 90.00th=[13173], 95.00th=[13566], 00:24:03.814 | 99.00th=[14222], 99.50th=[14877], 99.90th=[20317], 99.95th=[21103], 00:24:03.814 | 99.99th=[21365] 00:24:03.814 bw ( KiB/s): min=21320, max=22776, per=99.72%, avg=22350.50, stdev=693.43, samples=4 00:24:03.814 iops : min= 5330, max= 5694, avg=5587.50, stdev=173.31, samples=4 00:24:03.814 write: IOPS=5564, BW=21.7MiB/s (22.8MB/s)(43.7MiB/2009msec); 0 zone resets 00:24:03.814 slat (nsec): min=1801, max=297642, avg=2383.70, stdev=3445.18 00:24:03.814 clat (usec): min=3146, max=19573, avg=10807.90, stdev=928.02 00:24:03.814 lat (usec): min=3164, max=19575, avg=10810.28, stdev=927.82 00:24:03.814 clat percentiles (usec): 00:24:03.814 | 1.00th=[ 8848], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10028], 00:24:03.814 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10814], 60.00th=[11076], 00:24:03.814 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11863], 95.00th=[12125], 00:24:03.814 | 99.00th=[12780], 99.50th=[13042], 99.90th=[17171], 99.95th=[19268], 00:24:03.814 | 99.99th=[19530] 00:24:03.814 bw ( KiB/s): min=22080, max=22546, per=99.94%, avg=22246.50, stdev=214.34, samples=4 00:24:03.814 iops : min= 5520, max= 5636, avg=5561.50, stdev=53.35, samples=4 00:24:03.814 lat (msec) : 4=0.02%, 10=9.24%, 20=90.68%, 50=0.06% 00:24:03.814 cpu : usr=73.21%, sys=22.86%, ctx=5, majf=0, minf=1556 00:24:03.814 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:24:03.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:03.814 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:03.814 issued rwts: total=11257,11180,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:03.814 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:03.814 00:24:03.814 Run status group 0 (all jobs): 00:24:03.814 READ: bw=21.9MiB/s (23.0MB/s), 21.9MiB/s-21.9MiB/s (23.0MB/s-23.0MB/s), io=44.0MiB (46.1MB), run=2009-2009msec 00:24:03.814 WRITE: bw=21.7MiB/s (22.8MB/s), 21.7MiB/s-21.7MiB/s (22.8MB/s-22.8MB/s), io=43.7MiB (45.8MB), run=2009-2009msec 00:24:03.814 ----------------------------------------------------- 00:24:03.814 Suppressions used: 00:24:03.814 count bytes template 00:24:03.814 1 58 /usr/src/fio/parse.c 00:24:03.814 1 8 libtcmalloc_minimal.so 00:24:03.814 ----------------------------------------------------- 00:24:03.814 00:24:04.072 14:28:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:04.072 14:28:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:24:04.330 14:28:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=60efed04-cfdf-49f7-9e0e-9299c0e109c2 00:24:04.330 14:28:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 
-- # get_lvs_free_mb 60efed04-cfdf-49f7-9e0e-9299c0e109c2 00:24:04.330 14:28:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local lvs_uuid=60efed04-cfdf-49f7-9e0e-9299c0e109c2 00:24:04.330 14:28:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local lvs_info 00:24:04.330 14:28:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local fc 00:24:04.330 14:28:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local cs 00:24:04.330 14:28:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:04.589 14:28:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # lvs_info='[ 00:24:04.589 { 00:24:04.589 "uuid": "2319d0ea-9417-4bf7-ab56-8dbd1c6cffe8", 00:24:04.589 "name": "lvs_0", 00:24:04.589 "base_bdev": "Nvme0n1", 00:24:04.589 "total_data_clusters": 4, 00:24:04.589 "free_clusters": 0, 00:24:04.589 "block_size": 4096, 00:24:04.589 "cluster_size": 1073741824 00:24:04.589 }, 00:24:04.589 { 00:24:04.589 "uuid": "60efed04-cfdf-49f7-9e0e-9299c0e109c2", 00:24:04.589 "name": "lvs_n_0", 00:24:04.589 "base_bdev": "6895979f-1065-45fc-a4a1-25f1521786e6", 00:24:04.589 "total_data_clusters": 1022, 00:24:04.589 "free_clusters": 1022, 00:24:04.589 "block_size": 4096, 00:24:04.589 "cluster_size": 4194304 00:24:04.589 } 00:24:04.589 ]' 00:24:04.589 14:28:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # jq '.[] | select(.uuid=="60efed04-cfdf-49f7-9e0e-9299c0e109c2") .free_clusters' 00:24:04.589 14:28:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # fc=1022 00:24:04.589 14:28:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # jq '.[] | select(.uuid=="60efed04-cfdf-49f7-9e0e-9299c0e109c2") .cluster_size' 00:24:04.589 4088 00:24:04.589 14:28:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # cs=4194304 00:24:04.589 14:28:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1375 -- # free_mb=4088 00:24:04.589 14:28:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1376 -- # echo 4088 00:24:04.589 14:28:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:24:04.848 40ba5728-e1df-42a5-9f3d-1529bd250357 00:24:04.848 14:28:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:24:05.107 14:28:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:24:05.366 14:28:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:24:05.651 14:28:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:24:05.651 14:28:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 
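The free_mb figures that get_lvs_free_mb prints above are plain cluster arithmetic on the bdev_lvol_get_lvstores output: free_clusters times cluster_size, converted to MiB. With the numbers from the trace, lvs_0 offered 4 * 1 GiB clusters = 4096 MiB when lbd_0 was created (its free_clusters has since dropped to 0 in the listing above), and the nested lvs_n_0 on top of lvs_0/lbd_0 offers 1022 * 4 MiB clusters = 4088 MiB, which is the size handed to bdev_lvol_create for lbd_nest_0 (the nested store does not expose the full 4096 MiB of the lvol underneath it). A sketch of the equivalent query, using the same rpc.py and jq as the trace, shown for lvs_n_0:

  # free MiB = free_clusters * cluster_size / (1024*1024)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores \
      | jq '.[] | select(.name=="lvs_n_0") | .free_clusters * .cluster_size / (1024*1024)'
  # 1022 * 4194304 / 1048576 = 4088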
00:24:05.651 14:28:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:24:05.651 14:28:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:05.651 14:28:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:24:05.651 14:28:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:05.651 14:28:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:24:05.651 14:28:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:24:05.651 14:28:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:24:05.651 14:28:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:24:05.651 14:28:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:24:05.651 14:28:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:05.651 14:28:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:24:05.651 14:28:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:24:05.651 14:28:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # break 00:24:05.651 14:28:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:24:05.651 14:28:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:24:05.651 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:05.651 fio-3.35 00:24:05.651 Starting 1 thread 00:24:08.195 00:24:08.195 test: (groupid=0, jobs=1): err= 0: pid=81742: Wed Nov 6 14:28:35 2024 00:24:08.195 read: IOPS=5422, BW=21.2MiB/s (22.2MB/s)(42.6MiB/2009msec) 00:24:08.195 slat (nsec): min=1723, max=443665, avg=2124.40, stdev=5544.98 00:24:08.195 clat (usec): min=3952, max=28883, avg=12389.98, stdev=2683.89 00:24:08.195 lat (usec): min=3964, max=28885, avg=12392.11, stdev=2683.67 00:24:08.195 clat percentiles (usec): 00:24:08.195 | 1.00th=[ 9372], 5.00th=[10159], 10.00th=[10552], 20.00th=[10945], 00:24:08.195 | 30.00th=[11207], 40.00th=[11600], 50.00th=[11863], 60.00th=[12125], 00:24:08.195 | 70.00th=[12387], 80.00th=[12911], 90.00th=[13829], 95.00th=[19792], 00:24:08.195 | 99.00th=[23725], 99.50th=[25035], 99.90th=[28443], 99.95th=[28443], 00:24:08.195 | 99.99th=[28705] 00:24:08.195 bw ( KiB/s): min=20120, max=23424, per=99.87%, avg=21660.00, stdev=1729.36, samples=4 00:24:08.195 iops : min= 5030, max= 5856, avg=5415.00, stdev=432.34, samples=4 00:24:08.195 write: IOPS=5404, BW=21.1MiB/s (22.1MB/s)(42.4MiB/2009msec); 0 zone resets 00:24:08.195 slat (nsec): min=1780, max=399669, avg=2206.08, stdev=4443.32 00:24:08.195 clat (usec): min=3747, max=26791, avg=11132.27, stdev=2409.40 00:24:08.195 lat (usec): min=3766, max=26793, avg=11134.48, stdev=2409.30 00:24:08.195 clat percentiles (usec): 00:24:08.195 | 1.00th=[ 8455], 5.00th=[ 9110], 
10.00th=[ 9372], 20.00th=[ 9765], 00:24:08.195 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10552], 60.00th=[10814], 00:24:08.195 | 70.00th=[11207], 80.00th=[11600], 90.00th=[12518], 95.00th=[17695], 00:24:08.195 | 99.00th=[21103], 99.50th=[22676], 99.90th=[26346], 99.95th=[26608], 00:24:08.195 | 99.99th=[26870] 00:24:08.195 bw ( KiB/s): min=19520, max=23688, per=99.86%, avg=21586.00, stdev=2053.64, samples=4 00:24:08.195 iops : min= 4880, max= 5922, avg=5396.50, stdev=513.41, samples=4 00:24:08.195 lat (msec) : 4=0.01%, 10=14.68%, 20=81.93%, 50=3.38% 00:24:08.195 cpu : usr=73.36%, sys=22.96%, ctx=3, majf=0, minf=1557 00:24:08.195 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:24:08.195 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:08.195 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:08.195 issued rwts: total=10893,10857,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:08.195 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:08.195 00:24:08.195 Run status group 0 (all jobs): 00:24:08.195 READ: bw=21.2MiB/s (22.2MB/s), 21.2MiB/s-21.2MiB/s (22.2MB/s-22.2MB/s), io=42.6MiB (44.6MB), run=2009-2009msec 00:24:08.195 WRITE: bw=21.1MiB/s (22.1MB/s), 21.1MiB/s-21.1MiB/s (22.1MB/s-22.1MB/s), io=42.4MiB (44.5MB), run=2009-2009msec 00:24:08.454 ----------------------------------------------------- 00:24:08.454 Suppressions used: 00:24:08.454 count bytes template 00:24:08.454 1 58 /usr/src/fio/parse.c 00:24:08.454 1 8 libtcmalloc_minimal.so 00:24:08.454 ----------------------------------------------------- 00:24:08.454 00:24:08.454 14:28:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:24:08.454 14:28:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:24:08.713 14:28:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:24:08.972 14:28:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:24:08.972 14:28:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:24:09.231 14:28:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:24:09.490 14:28:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:24:10.058 14:28:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:10.058 14:28:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:10.058 14:28:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:10.058 14:28:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:10.058 14:28:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:24:10.058 14:28:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:10.058 14:28:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:24:10.058 14:28:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:10.058 14:28:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r 
nvme-tcp 00:24:10.058 rmmod nvme_tcp 00:24:10.058 rmmod nvme_fabrics 00:24:10.058 rmmod nvme_keyring 00:24:10.058 14:28:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:10.059 14:28:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:24:10.059 14:28:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:24:10.059 14:28:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 81456 ']' 00:24:10.059 14:28:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 81456 00:24:10.059 14:28:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' -z 81456 ']' 00:24:10.059 14:28:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # kill -0 81456 00:24:10.059 14:28:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # uname 00:24:10.059 14:28:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:10.059 14:28:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81456 00:24:10.059 killing process with pid 81456 00:24:10.059 14:28:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:10.059 14:28:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:10.059 14:28:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81456' 00:24:10.059 14:28:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@971 -- # kill 81456 00:24:10.059 14:28:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@976 -- # wait 81456 00:24:11.437 14:28:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:11.437 14:28:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:11.437 14:28:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:11.437 14:28:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:24:11.437 14:28:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:24:11.437 14:28:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:11.437 14:28:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:24:11.437 14:28:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:11.437 14:28:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:11.437 14:28:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:11.437 14:28:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:11.437 14:28:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:11.696 14:28:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:11.696 14:28:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:11.696 14:28:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:11.696 14:28:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:11.696 14:28:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:11.696 14:28:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:11.696 14:28:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:11.696 14:28:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:11.696 14:28:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:11.696 14:28:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:11.696 14:28:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:11.696 14:28:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:11.696 14:28:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:11.696 14:28:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:11.955 14:28:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:24:11.955 00:24:11.955 real 0m21.545s 00:24:11.955 user 1m29.519s 00:24:11.955 sys 0m5.509s 00:24:11.955 14:28:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:11.955 14:28:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.956 ************************************ 00:24:11.956 END TEST nvmf_fio_host 00:24:11.956 ************************************ 00:24:11.956 14:28:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:11.956 14:28:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:11.956 14:28:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:11.956 14:28:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.956 ************************************ 00:24:11.956 START TEST nvmf_failover 00:24:11.956 ************************************ 00:24:11.956 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:11.956 * Looking for test storage... 
00:24:11.956 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:11.956 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:11.956 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:24:11.956 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:12.214 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:12.214 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:12.214 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:12.214 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:12.214 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:24:12.214 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:24:12.214 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:24:12.214 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:24:12.214 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:24:12.214 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:24:12.214 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:24:12.214 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:12.214 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:24:12.214 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:24:12.214 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:12.214 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:12.214 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:24:12.214 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:24:12.214 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:12.214 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:24:12.214 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:24:12.214 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:24:12.214 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:24:12.214 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:12.214 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:24:12.214 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:24:12.214 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:12.214 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:12.214 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:24:12.214 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:12.214 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:12.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:12.214 --rc genhtml_branch_coverage=1 00:24:12.214 --rc genhtml_function_coverage=1 00:24:12.214 --rc genhtml_legend=1 00:24:12.214 --rc geninfo_all_blocks=1 00:24:12.214 --rc geninfo_unexecuted_blocks=1 00:24:12.214 00:24:12.214 ' 00:24:12.214 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:12.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:12.214 --rc genhtml_branch_coverage=1 00:24:12.214 --rc genhtml_function_coverage=1 00:24:12.214 --rc genhtml_legend=1 00:24:12.214 --rc geninfo_all_blocks=1 00:24:12.214 --rc geninfo_unexecuted_blocks=1 00:24:12.214 00:24:12.214 ' 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:12.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:12.215 --rc genhtml_branch_coverage=1 00:24:12.215 --rc genhtml_function_coverage=1 00:24:12.215 --rc genhtml_legend=1 00:24:12.215 --rc geninfo_all_blocks=1 00:24:12.215 --rc geninfo_unexecuted_blocks=1 00:24:12.215 00:24:12.215 ' 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:12.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:12.215 --rc genhtml_branch_coverage=1 00:24:12.215 --rc genhtml_function_coverage=1 00:24:12.215 --rc genhtml_legend=1 00:24:12.215 --rc geninfo_all_blocks=1 00:24:12.215 --rc geninfo_unexecuted_blocks=1 00:24:12.215 00:24:12.215 ' 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.215 
14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:12.215 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
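(Aside: the nvmf/common.sh bootstrap traced above pins the three listener ports and derives the host identity from a freshly generated NQN. A minimal sketch of that derivation, assuming nvme-cli's gen-hostnqn subcommand as shown in the trace; the parameter-expansion split is an illustrative guess, not the script's exact code:)

  NVMF_PORT=4420; NVMF_SECOND_PORT=4421; NVMF_THIRD_PORT=4422
  NVME_HOSTNQN=$(nvme gen-hostnqn)            # e.g. nqn.2014-08.org.nvmexpress:uuid:406d54d0-...
  NVME_HOSTID=${NVME_HOSTNQN##*:}             # illustrative: keep only the UUID after the last colon
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")   # reused by every nvme connect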
00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:12.215 Cannot find device "nvmf_init_br" 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:12.215 Cannot find device "nvmf_init_br2" 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:24:12.215 Cannot find device "nvmf_tgt_br" 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:12.215 Cannot find device "nvmf_tgt_br2" 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:12.215 Cannot find device "nvmf_init_br" 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:24:12.215 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:12.215 Cannot find device "nvmf_init_br2" 00:24:12.216 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:24:12.216 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:12.216 Cannot find device "nvmf_tgt_br" 00:24:12.216 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:24:12.216 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:12.475 Cannot find device "nvmf_tgt_br2" 00:24:12.475 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:24:12.475 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:12.475 Cannot find device "nvmf_br" 00:24:12.475 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:24:12.475 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:12.475 Cannot find device "nvmf_init_if" 00:24:12.475 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:24:12.475 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:12.475 Cannot find device "nvmf_init_if2" 00:24:12.475 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:24:12.475 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:12.475 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:12.475 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:24:12.475 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:12.475 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:12.475 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:24:12.475 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:12.475 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:12.475 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:12.475 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:12.475 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:12.475 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:12.475 
14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:12.475 14:28:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:12.475 14:28:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:12.475 14:28:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:12.475 14:28:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:12.475 14:28:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:12.475 14:28:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:12.475 14:28:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:12.475 14:28:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:12.475 14:28:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:12.475 14:28:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:12.475 14:28:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:12.475 14:28:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:12.475 14:28:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:12.475 14:28:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:12.475 14:28:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:12.475 14:28:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:12.475 14:28:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:12.475 14:28:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:12.475 14:28:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:12.734 14:28:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:12.734 14:28:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:12.734 14:28:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:12.734 14:28:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:12.734 14:28:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:12.734 14:28:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:24:12.734 14:28:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:12.734 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:12.734 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:24:12.734 00:24:12.734 --- 10.0.0.3 ping statistics --- 00:24:12.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:12.734 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:24:12.734 14:28:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:12.734 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:12.734 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.084 ms 00:24:12.734 00:24:12.734 --- 10.0.0.4 ping statistics --- 00:24:12.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:12.734 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:24:12.734 14:28:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:12.734 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:12.734 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:24:12.735 00:24:12.735 --- 10.0.0.1 ping statistics --- 00:24:12.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:12.735 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:24:12.735 14:28:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:12.735 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:12.735 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:24:12.735 00:24:12.735 --- 10.0.0.2 ping statistics --- 00:24:12.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:12.735 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:24:12.735 14:28:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:12.735 14:28:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:24:12.735 14:28:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:12.735 14:28:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:12.735 14:28:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:12.735 14:28:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:12.735 14:28:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:12.735 14:28:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:12.735 14:28:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:12.735 14:28:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:12.735 14:28:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:12.735 14:28:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:12.735 14:28:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:12.735 14:28:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=82051 00:24:12.735 14:28:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:12.735 14:28:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 82051 00:24:12.735 14:28:40 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 82051 ']' 00:24:12.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:12.735 14:28:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:12.735 14:28:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:12.735 14:28:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:12.735 14:28:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:12.735 14:28:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:12.735 [2024-11-06 14:28:40.306807] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:24:12.735 [2024-11-06 14:28:40.306946] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:12.994 [2024-11-06 14:28:40.493372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:13.253 [2024-11-06 14:28:40.639232] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:13.253 [2024-11-06 14:28:40.639475] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:13.253 [2024-11-06 14:28:40.639543] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:13.253 [2024-11-06 14:28:40.639603] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:13.253 [2024-11-06 14:28:40.639656] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
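(Aside: nvmf_veth_init in the trace above builds the virtual test network before the target is launched inside the nvmf_tgt_ns_spdk namespace. A condensed sketch of that topology, using only commands that appear in the trace; addressing of the initiator-side interfaces and the second iptables rule are left out for brevity:)

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br      # initiator side, 10.0.0.1/24
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2     # initiator side, 10.0.0.2/24
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br       # target side,    10.0.0.3/24
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2      # target side,    10.0.0.4/24
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                 # move target ends into the netns
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip link add nvmf_br type bridge && ip link set nvmf_br up       # bridge ties the four peers together
  for br_if in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$br_if" up && ip link set "$br_if" master nvmf_br
  done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.3                                                  # sanity check: initiator -> target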
00:24:13.253 [2024-11-06 14:28:40.642180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:13.253 [2024-11-06 14:28:40.642386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:13.253 [2024-11-06 14:28:40.642411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:13.512 [2024-11-06 14:28:40.894544] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:13.512 14:28:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:13.512 14:28:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:24:13.512 14:28:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:13.512 14:28:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:13.512 14:28:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:13.771 14:28:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:13.771 14:28:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:13.771 [2024-11-06 14:28:41.372964] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:13.771 14:28:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:14.339 Malloc0 00:24:14.340 14:28:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:14.340 14:28:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:14.614 14:28:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:14.892 [2024-11-06 14:28:42.300387] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:14.892 14:28:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:24:14.892 [2024-11-06 14:28:42.496481] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:24:14.892 14:28:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:24:15.151 [2024-11-06 14:28:42.704422] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:24:15.151 14:28:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=82104 00:24:15.151 14:28:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:15.151 14:28:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 
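(Aside: everything host/failover.sh has configured on the target so far reduces to one malloc-backed subsystem listening on three TCP ports of the namespaced target address. The equivalent rpc.py sequence, condensed from the commands in the trace:)

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc_py nvmf_create_transport -t tcp -o -u 8192                  # TCP transport, options as passed by the test
  $rpc_py bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB RAM-backed bdev, 512 B blocks
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do                                   # three listeners = three failover paths
      $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s "$port"
  done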
00:24:15.151 14:28:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 82104 /var/tmp/bdevperf.sock 00:24:15.151 14:28:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 82104 ']' 00:24:15.151 14:28:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:15.151 14:28:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:15.151 14:28:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:15.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:15.151 14:28:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:15.151 14:28:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:16.089 14:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:16.089 14:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:24:16.089 14:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:16.349 NVMe0n1 00:24:16.349 14:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:16.608 00:24:16.608 14:28:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=82129 00:24:16.608 14:28:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:16.608 14:28:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:24:17.986 14:28:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:17.986 [2024-11-06 14:28:45.421908] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:24:17.986 [2024-11-06 14:28:45.422207] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:24:17.986 [2024-11-06 14:28:45.422332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:24:17.986 [2024-11-06 14:28:45.422495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:24:17.986 [2024-11-06 14:28:45.422595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:24:17.986 [2024-11-06 14:28:45.422728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:24:17.986 [2024-11-06 14:28:45.422869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:24:17.986 
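(Aside: the failover itself is driven from the initiator: bdevperf attaches the same subsystem twice with -x failover, the verify workload is started over the bdevperf RPC socket, and then the active listener is torn down so bdev_nvme has to move I/O to the surviving path. A condensed sketch of the first round, taken from the commands in the trace:)

  bdevperf_rpc_sock=/var/tmp/bdevperf.sock
  $rpc_py -s $bdevperf_rpc_sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
          -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover        # primary path
  $rpc_py -s $bdevperf_rpc_sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 \
          -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover        # standby path
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $bdevperf_rpc_sock perform_tests &
  sleep 1
  $rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420   # force failover to 4421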
[2024-11-06 14:28:45.422977] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:24:17.986 [... identical tcp.c:1773 "recv state of tqpair=0x618000003080 is same with the state(6) to be set" records repeat through 2024-11-06 14:28:45.424282 ...] 00:24:17.987 14:28:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:24:21.276 14:28:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:21.276 00:24:21.276 14:28:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:24:21.535 [2024-11-06 14:28:48.953058] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:24:21.535 14:28:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:24:24.821 14:28:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:24.821 [2024-11-06 14:28:52.215424] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:24.821 14:28:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:24:25.755 14:28:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:24:26.014 [2024-11-06 14:28:53.444542] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:24:26.014 [2024-11-06 14:28:53.444760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:24:26.014 [2024-11-06 14:28:53.444785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:24:26.014 [2024-11-06 14:28:53.444797] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:24:26.014 [2024-11-06 14:28:53.444813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:24:26.014 [2024-11-06 14:28:53.444824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:24:26.014 14:28:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 82129 00:24:32.581 { 00:24:32.581 "results": [ 00:24:32.581 { 00:24:32.581 "job": "NVMe0n1", 00:24:32.581 "core_mask": "0x1", 00:24:32.581 "workload": "verify", 00:24:32.581 "status": "finished", 00:24:32.582 "verify_range": { 00:24:32.582 "start": 0, 00:24:32.582 "length": 16384 00:24:32.582 }, 00:24:32.582 "queue_depth": 128, 00:24:32.582 "io_size": 4096, 00:24:32.582 "runtime": 15.012324, 00:24:32.582 "iops": 8468.442327783494, 00:24:32.582 "mibps": 33.07985284290427, 00:24:32.582 "io_failed": 3813, 00:24:32.582 "io_timeout": 0, 00:24:32.582 "avg_latency_us": 14647.742525919908, 00:24:32.582 "min_latency_us": 509.9437751004016, 00:24:32.582 "max_latency_us": 20634.6281124498 00:24:32.582 } 00:24:32.582 ], 00:24:32.582 "core_count": 1 00:24:32.582 } 00:24:32.582 14:28:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 82104 00:24:32.582 14:28:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 82104 ']' 00:24:32.582 14:28:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 82104 00:24:32.582 14:28:59 nvmf_tcp.nvmf_host.nvmf_failover 
-- common/autotest_common.sh@957 -- # uname 00:24:32.582 14:28:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:32.582 14:28:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82104 00:24:32.582 killing process with pid 82104 00:24:32.582 14:28:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:32.582 14:28:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:32.582 14:28:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82104' 00:24:32.582 14:28:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 82104 00:24:32.582 14:28:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 82104 00:24:33.157 14:29:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:33.157 [2024-11-06 14:28:42.822022] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:24:33.157 [2024-11-06 14:28:42.822160] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82104 ] 00:24:33.157 [2024-11-06 14:28:43.006185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:33.157 [2024-11-06 14:28:43.145829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:33.157 [2024-11-06 14:28:43.373521] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:33.157 Running I/O for 15 seconds... 
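(Aside: the perform_tests summary that bdevperf printed above is plain JSON, so the headline numbers can be pulled out mechanically. A minimal sketch, assuming the blob was saved to results.json and that jq is available; neither step is done by the test itself:)

  jq -r '.results[0] | "\(.job): \(.iops) IOPS, \(.io_failed) failed I/Os over \(.runtime) s"' results.json
  # -> NVMe0n1: 8468.442327783494 IOPS, 3813 failed I/Os over 15.012324 s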
00:24:33.157 7829.00 IOPS, 30.58 MiB/s [2024-11-06T14:29:00.792Z] [2024-11-06 14:28:45.424378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:68032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.157 [2024-11-06 14:28:45.424437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.157 [2024-11-06 14:28:45.424488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:68040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.157 [2024-11-06 14:28:45.424510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.157 [2024-11-06 14:28:45.424540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:68048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.157 [2024-11-06 14:28:45.424560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.157 [2024-11-06 14:28:45.424586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:68056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.157 [2024-11-06 14:28:45.424612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.157 [2024-11-06 14:28:45.424645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:68064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.157 [2024-11-06 14:28:45.424670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.157 [2024-11-06 14:28:45.424695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:68072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.157 [2024-11-06 14:28:45.424711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.157 [2024-11-06 14:28:45.424734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:68080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.157 [2024-11-06 14:28:45.424751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.157 [2024-11-06 14:28:45.424784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:68088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.157 [2024-11-06 14:28:45.424810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.157 [2024-11-06 14:28:45.424849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:68096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.157 [2024-11-06 14:28:45.424873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.157 [2024-11-06 14:28:45.424907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:68104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.157 [2024-11-06 14:28:45.424930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 [00:24:33.157-00:24:33.160 / 2024-11-06 14:28:45.424962-14:28:45.430142: nvme_qpair.c: 243:nvme_io_qpair_print_command and 474:spdk_nvme_print_completion repeat for every queued I/O on qid:1 (READ lba:68112-68776 and WRITE lba:68784-68992, len:8 each), all completed as ABORTED - SQ DELETION (00/08) cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; identical per-command entries condensed] 00:24:33.160 [2024-11-06 14:28:45.430166] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:69000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.160 [2024-11-06 14:28:45.430186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.160 [2024-11-06 14:28:45.430214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.160 [2024-11-06 14:28:45.430234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.160 [2024-11-06 14:28:45.430258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:69016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.160 [2024-11-06 14:28:45.430278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.160 [2024-11-06 14:28:45.430308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:69024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.161 [2024-11-06 14:28:45.430328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.161 [2024-11-06 14:28:45.430352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:69032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.161 [2024-11-06 14:28:45.430372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.161 [2024-11-06 14:28:45.430396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:69040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.161 [2024-11-06 14:28:45.430417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.161 [2024-11-06 14:28:45.430440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b780 is same with the state(6) to be set 00:24:33.161 [2024-11-06 14:28:45.430473] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:33.161 [2024-11-06 14:28:45.430495] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:33.161 [2024-11-06 14:28:45.430516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69048 len:8 PRP1 0x0 PRP2 0x0 00:24:33.161 [2024-11-06 14:28:45.430542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.161 [2024-11-06 14:28:45.430848] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:24:33.161 [2024-11-06 14:28:45.430925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.161 [2024-11-06 14:28:45.430953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.161 [2024-11-06 14:28:45.430976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.161 [2024-11-06 14:28:45.430995] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.161 [2024-11-06 14:28:45.431016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.161 [2024-11-06 14:28:45.431036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.161 [2024-11-06 14:28:45.431057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.161 [2024-11-06 14:28:45.431077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.161 [2024-11-06 14:28:45.431103] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:24:33.161 [2024-11-06 14:28:45.431174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:24:33.161 [2024-11-06 14:28:45.434264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:24:33.161 [2024-11-06 14:28:45.459972] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:24:33.161 8334.00 IOPS, 32.55 MiB/s [2024-11-06T14:29:00.796Z] 8749.00 IOPS, 34.18 MiB/s [2024-11-06T14:29:00.796Z] 9025.75 IOPS, 35.26 MiB/s [2024-11-06T14:29:00.796Z] [2024-11-06 14:28:48.953464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:86296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.161 [2024-11-06 14:28:48.953521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.161 [2024-11-06 14:28:48.953574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:86304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.161 [2024-11-06 14:28:48.953592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.161 [2024-11-06 14:28:48.953612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.161 [2024-11-06 14:28:48.953628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.161 [2024-11-06 14:28:48.953646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:86768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.161 [2024-11-06 14:28:48.953662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.161 [2024-11-06 14:28:48.953680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.161 [2024-11-06 14:28:48.953696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.161 [2024-11-06 14:28:48.953714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.161 [2024-11-06 
14:28:48.953730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [00:24:33.161-00:24:33.163 / 2024-11-06 14:28:48.953748-14:28:48.956198: nvme_qpair.c: 243:nvme_io_qpair_print_command and 474:spdk_nvme_print_completion repeat for another run of queued I/O on qid:1 (READ lba:86312-86640 and WRITE lba:86792-87008, len:8 each), all completed as ABORTED - SQ DELETION (00/08) cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; identical per-command entries condensed] 00:24:33.163 [2024-11-06 14:28:48.956216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:86648 len:8 SGL TRANSPORT
DATA BLOCK TRANSPORT 0x0 00:24:33.163 [2024-11-06 14:28:48.956237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.163 [2024-11-06 14:28:48.956255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:86656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.163 [2024-11-06 14:28:48.956270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.163 [2024-11-06 14:28:48.956287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:86664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.163 [2024-11-06 14:28:48.956303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.163 [2024-11-06 14:28:48.956321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:86672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.163 [2024-11-06 14:28:48.956338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.163 [2024-11-06 14:28:48.956356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:86680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.163 [2024-11-06 14:28:48.956372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.163 [2024-11-06 14:28:48.956389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:86688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.163 [2024-11-06 14:28:48.956405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.163 [2024-11-06 14:28:48.956422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:87016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.163 [2024-11-06 14:28:48.956438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.163 [2024-11-06 14:28:48.956456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.163 [2024-11-06 14:28:48.956472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.163 [2024-11-06 14:28:48.956490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:87032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.163 [2024-11-06 14:28:48.956505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.163 [2024-11-06 14:28:48.956527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:87040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.163 [2024-11-06 14:28:48.956543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.163 [2024-11-06 14:28:48.956562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:87048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.163 
[2024-11-06 14:28:48.956578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.163 [2024-11-06 14:28:48.956596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:87056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.163 [2024-11-06 14:28:48.956612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.163 [2024-11-06 14:28:48.956629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:87064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.163 [2024-11-06 14:28:48.956645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.163 [2024-11-06 14:28:48.956668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:87072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.163 [2024-11-06 14:28:48.956684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.163 [2024-11-06 14:28:48.956702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:87080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.163 [2024-11-06 14:28:48.956717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.163 [2024-11-06 14:28:48.956735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:87088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.163 [2024-11-06 14:28:48.956750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.163 [2024-11-06 14:28:48.956768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:87096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.163 [2024-11-06 14:28:48.956784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.163 [2024-11-06 14:28:48.956801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:87104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.163 [2024-11-06 14:28:48.956817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.163 [2024-11-06 14:28:48.956844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:87112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.164 [2024-11-06 14:28:48.956861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.164 [2024-11-06 14:28:48.956879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:87120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.164 [2024-11-06 14:28:48.956895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.164 [2024-11-06 14:28:48.956914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:87128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.164 [2024-11-06 14:28:48.956943] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.164 [2024-11-06 14:28:48.956962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:87136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.164 [2024-11-06 14:28:48.956978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.164 [2024-11-06 14:28:48.956996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.164 [2024-11-06 14:28:48.957012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.164 [2024-11-06 14:28:48.957030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:87152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.164 [2024-11-06 14:28:48.957046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.164 [2024-11-06 14:28:48.957064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:87160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.164 [2024-11-06 14:28:48.957080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.164 [2024-11-06 14:28:48.957099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:87168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.164 [2024-11-06 14:28:48.957115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.164 [2024-11-06 14:28:48.957139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:86696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.164 [2024-11-06 14:28:48.957155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.164 [2024-11-06 14:28:48.957172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:86704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.164 [2024-11-06 14:28:48.957188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.164 [2024-11-06 14:28:48.957207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:86712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.164 [2024-11-06 14:28:48.957223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.164 [2024-11-06 14:28:48.957241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:86720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.164 [2024-11-06 14:28:48.957257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.164 [2024-11-06 14:28:48.957275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:86728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.164 [2024-11-06 14:28:48.957290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.164 [2024-11-06 14:28:48.957308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:86736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.164 [2024-11-06 14:28:48.957324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.164 [2024-11-06 14:28:48.957342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:86744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.164 [2024-11-06 14:28:48.957358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.164 [2024-11-06 14:28:48.957375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ba00 is same with the state(6) to be set 00:24:33.164 [2024-11-06 14:28:48.957395] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:33.164 [2024-11-06 14:28:48.957408] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:33.164 [2024-11-06 14:28:48.957426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86752 len:8 PRP1 0x0 PRP2 0x0 00:24:33.164 [2024-11-06 14:28:48.957442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.164 [2024-11-06 14:28:48.957459] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:33.164 [2024-11-06 14:28:48.957471] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:33.164 [2024-11-06 14:28:48.957484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87176 len:8 PRP1 0x0 PRP2 0x0 00:24:33.164 [2024-11-06 14:28:48.957500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.164 [2024-11-06 14:28:48.957515] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:33.164 [2024-11-06 14:28:48.957527] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:33.164 [2024-11-06 14:28:48.957540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87184 len:8 PRP1 0x0 PRP2 0x0 00:24:33.164 [2024-11-06 14:28:48.957555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.164 [2024-11-06 14:28:48.957577] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:33.164 [2024-11-06 14:28:48.957589] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:33.164 [2024-11-06 14:28:48.957601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87192 len:8 PRP1 0x0 PRP2 0x0 00:24:33.164 [2024-11-06 14:28:48.957618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.164 [2024-11-06 14:28:48.957633] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:33.164 [2024-11-06 14:28:48.957645] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:33.164 [2024-11-06 
14:28:48.957657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87200 len:8 PRP1 0x0 PRP2 0x0 00:24:33.164 [2024-11-06 14:28:48.957673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.164 [2024-11-06 14:28:48.957688] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:33.164 [2024-11-06 14:28:48.957700] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:33.164 [2024-11-06 14:28:48.957712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87208 len:8 PRP1 0x0 PRP2 0x0 00:24:33.164 [2024-11-06 14:28:48.957728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.164 [2024-11-06 14:28:48.957743] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:33.164 [2024-11-06 14:28:48.957754] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:33.164 [2024-11-06 14:28:48.957766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87216 len:8 PRP1 0x0 PRP2 0x0 00:24:33.164 [2024-11-06 14:28:48.957782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.164 [2024-11-06 14:28:48.957797] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:33.164 [2024-11-06 14:28:48.957809] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:33.164 [2024-11-06 14:28:48.957821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87224 len:8 PRP1 0x0 PRP2 0x0 00:24:33.164 [2024-11-06 14:28:48.957846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.164 [2024-11-06 14:28:48.957862] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:33.164 [2024-11-06 14:28:48.957874] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:33.164 [2024-11-06 14:28:48.957888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87232 len:8 PRP1 0x0 PRP2 0x0 00:24:33.164 [2024-11-06 14:28:48.957903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.164 [2024-11-06 14:28:48.957918] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:33.164 [2024-11-06 14:28:48.957930] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:33.164 [2024-11-06 14:28:48.957942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87240 len:8 PRP1 0x0 PRP2 0x0 00:24:33.164 [2024-11-06 14:28:48.957959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.164 [2024-11-06 14:28:48.957974] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:33.164 [2024-11-06 14:28:48.957985] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:33.164 [2024-11-06 14:28:48.957997] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87248 len:8 PRP1 0x0 PRP2 0x0 00:24:33.164 [2024-11-06 14:28:48.958019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.164 [2024-11-06 14:28:48.958035] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:33.164 [2024-11-06 14:28:48.958047] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:33.164 [2024-11-06 14:28:48.958059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87256 len:8 PRP1 0x0 PRP2 0x0 00:24:33.164 [2024-11-06 14:28:48.958076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.164 [2024-11-06 14:28:48.958091] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:33.164 [2024-11-06 14:28:48.958103] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:33.164 [2024-11-06 14:28:48.958115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87264 len:8 PRP1 0x0 PRP2 0x0 00:24:33.164 [2024-11-06 14:28:48.958131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.164 [2024-11-06 14:28:48.958146] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:33.165 [2024-11-06 14:28:48.958158] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:33.165 [2024-11-06 14:28:48.958170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87272 len:8 PRP1 0x0 PRP2 0x0 00:24:33.165 [2024-11-06 14:28:48.958186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.165 [2024-11-06 14:28:48.958202] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:33.165 [2024-11-06 14:28:48.958213] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:33.165 [2024-11-06 14:28:48.958226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87280 len:8 PRP1 0x0 PRP2 0x0 00:24:33.165 [2024-11-06 14:28:48.958241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.165 [2024-11-06 14:28:48.958256] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:33.165 [2024-11-06 14:28:48.958268] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:33.165 [2024-11-06 14:28:48.958280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87288 len:8 PRP1 0x0 PRP2 0x0 00:24:33.165 [2024-11-06 14:28:48.958296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.165 [2024-11-06 14:28:48.958311] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:33.165 [2024-11-06 14:28:48.958322] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:33.165 [2024-11-06 14:28:48.958335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:87296 len:8 PRP1 0x0 PRP2 0x0 00:24:33.165 [2024-11-06 14:28:48.958350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.165 [2024-11-06 14:28:48.958365] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:33.165 [2024-11-06 14:28:48.958377] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:33.165 [2024-11-06 14:28:48.958390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87304 len:8 PRP1 0x0 PRP2 0x0 00:24:33.165 [2024-11-06 14:28:48.958405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.165 [2024-11-06 14:28:48.958420] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:33.165 [2024-11-06 14:28:48.958432] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:33.165 [2024-11-06 14:28:48.958450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87312 len:8 PRP1 0x0 PRP2 0x0 00:24:33.165 [2024-11-06 14:28:48.958475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.165 [2024-11-06 14:28:48.958793] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:24:33.165 [2024-11-06 14:28:48.958862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.165 [2024-11-06 14:28:48.958883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.165 [2024-11-06 14:28:48.958902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.165 [2024-11-06 14:28:48.958918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.165 [2024-11-06 14:28:48.958935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.165 [2024-11-06 14:28:48.958951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.165 [2024-11-06 14:28:48.958969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.165 [2024-11-06 14:28:48.958985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.165 [2024-11-06 14:28:48.959002] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 
00:24:33.165 [2024-11-06 14:28:48.959048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:24:33.165 [2024-11-06 14:28:48.962037] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:24:33.165 [2024-11-06 14:28:48.985001] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:24:33.165 9115.80 IOPS, 35.61 MiB/s [2024-11-06T14:29:00.800Z] 9118.50 IOPS, 35.62 MiB/s [2024-11-06T14:29:00.800Z] 9089.57 IOPS, 35.51 MiB/s [2024-11-06T14:29:00.800Z] 9046.88 IOPS, 35.34 MiB/s [2024-11-06T14:29:00.800Z] 9016.33 IOPS, 35.22 MiB/s [2024-11-06T14:29:00.800Z] [2024-11-06 14:28:53.444999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.165 [2024-11-06 14:28:53.445055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.165 [2024-11-06 14:28:53.445087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:22104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.165 [2024-11-06 14:28:53.445104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.165 [2024-11-06 14:28:53.445123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:22112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.165 [2024-11-06 14:28:53.445140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.165 [2024-11-06 14:28:53.445158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:22120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.165 [2024-11-06 14:28:53.445174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.165 [2024-11-06 14:28:53.445192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:22128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.165 [2024-11-06 14:28:53.445208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.165 [2024-11-06 14:28:53.445252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:22136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.165 [2024-11-06 14:28:53.445269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.165 [2024-11-06 14:28:53.445287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.165 [2024-11-06 14:28:53.445318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.165 [2024-11-06 14:28:53.445336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.165 [2024-11-06 14:28:53.445353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:33.165 [2024-11-06 14:28:53.445370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.165 [2024-11-06 14:28:53.445386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs omitted (14:28:53.445-53.448): queued READ (lba 22152-22400) and WRITE (lba 22480-22944) commands on qid:1 are again reported as ABORTED - SQ DELETION (00/08) as the qpair is torn down ...] 00:24:33.168 [2024-11-06
14:28:53.448634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.168 [2024-11-06 14:28:53.448651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.168 [2024-11-06 14:28:53.448669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:22960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.168 [2024-11-06 14:28:53.448684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.168 [2024-11-06 14:28:53.448706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:22408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.168 [2024-11-06 14:28:53.448723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.168 [2024-11-06 14:28:53.448741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:22416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.168 [2024-11-06 14:28:53.448758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.168 [2024-11-06 14:28:53.448776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.168 [2024-11-06 14:28:53.448793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.168 [2024-11-06 14:28:53.448810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.168 [2024-11-06 14:28:53.448827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.168 [2024-11-06 14:28:53.448855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:22440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.168 [2024-11-06 14:28:53.448872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.168 [2024-11-06 14:28:53.448895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:22448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.168 [2024-11-06 14:28:53.448912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.168 [2024-11-06 14:28:53.448930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:22456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.168 [2024-11-06 14:28:53.448946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.168 [2024-11-06 14:28:53.448980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002c180 is same with the state(6) to be set 00:24:33.168 [2024-11-06 14:28:53.449002] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:33.168 [2024-11-06 14:28:53.449016] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed 
manually: 00:24:33.168 [2024-11-06 14:28:53.449030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22464 len:8 PRP1 0x0 PRP2 0x0 00:24:33.168 [2024-11-06 14:28:53.449048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.168 [2024-11-06 14:28:53.449067] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:33.168 [2024-11-06 14:28:53.449080] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:33.168 [2024-11-06 14:28:53.449093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22968 len:8 PRP1 0x0 PRP2 0x0 00:24:33.168 [2024-11-06 14:28:53.449110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.168 [2024-11-06 14:28:53.449126] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:33.168 [2024-11-06 14:28:53.449139] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:33.168 [2024-11-06 14:28:53.449152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:8 PRP1 0x0 PRP2 0x0 00:24:33.168 [2024-11-06 14:28:53.449169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.168 [2024-11-06 14:28:53.449186] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:33.168 [2024-11-06 14:28:53.449198] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:33.168 [2024-11-06 14:28:53.449212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22984 len:8 PRP1 0x0 PRP2 0x0 00:24:33.168 [2024-11-06 14:28:53.449228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.168 [2024-11-06 14:28:53.449244] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:33.168 [2024-11-06 14:28:53.449258] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:33.168 [2024-11-06 14:28:53.449271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22992 len:8 PRP1 0x0 PRP2 0x0 00:24:33.168 [2024-11-06 14:28:53.449287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.168 [2024-11-06 14:28:53.449304] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:33.168 [2024-11-06 14:28:53.449316] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:33.168 [2024-11-06 14:28:53.449330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23000 len:8 PRP1 0x0 PRP2 0x0 00:24:33.168 [2024-11-06 14:28:53.449347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.168 [2024-11-06 14:28:53.449364] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:33.168 [2024-11-06 14:28:53.449386] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:33.168 [2024-11-06 14:28:53.449400] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:8 PRP1 0x0 PRP2 0x0 00:24:33.168 [2024-11-06 14:28:53.449417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.168 [2024-11-06 14:28:53.449433] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:33.168 [2024-11-06 14:28:53.449445] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:33.168 [2024-11-06 14:28:53.449458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23016 len:8 PRP1 0x0 PRP2 0x0 00:24:33.168 [2024-11-06 14:28:53.449474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.168 [2024-11-06 14:28:53.449491] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:33.168 [2024-11-06 14:28:53.449503] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:33.168 [2024-11-06 14:28:53.449516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23024 len:8 PRP1 0x0 PRP2 0x0 00:24:33.168 [2024-11-06 14:28:53.449533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.168 [2024-11-06 14:28:53.449549] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:33.168 [2024-11-06 14:28:53.449561] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:33.168 [2024-11-06 14:28:53.449574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23032 len:8 PRP1 0x0 PRP2 0x0 00:24:33.168 [2024-11-06 14:28:53.449591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.168 [2024-11-06 14:28:53.449607] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:33.168 [2024-11-06 14:28:53.449619] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:33.168 [2024-11-06 14:28:53.449632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:8 PRP1 0x0 PRP2 0x0 00:24:33.168 [2024-11-06 14:28:53.449649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.168 [2024-11-06 14:28:53.449677] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:33.168 [2024-11-06 14:28:53.449688] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:33.168 [2024-11-06 14:28:53.449701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23048 len:8 PRP1 0x0 PRP2 0x0 00:24:33.168 [2024-11-06 14:28:53.449716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.168 [2024-11-06 14:28:53.449731] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:33.168 [2024-11-06 14:28:53.449744] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:33.169 [2024-11-06 14:28:53.449756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:23056 len:8 PRP1 0x0 PRP2 0x0 00:24:33.169 [2024-11-06 14:28:53.449772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.169 [2024-11-06 14:28:53.449799] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:33.169 [2024-11-06 14:28:53.449817] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:33.169 [2024-11-06 14:28:53.449830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23064 len:8 PRP1 0x0 PRP2 0x0 00:24:33.169 [2024-11-06 14:28:53.449846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.169 [2024-11-06 14:28:53.449876] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:33.169 [2024-11-06 14:28:53.449888] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:33.169 [2024-11-06 14:28:53.449901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:8 PRP1 0x0 PRP2 0x0 00:24:33.169 [2024-11-06 14:28:53.449916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.169 [2024-11-06 14:28:53.449931] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:33.169 [2024-11-06 14:28:53.449943] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:33.169 [2024-11-06 14:28:53.449956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23080 len:8 PRP1 0x0 PRP2 0x0 00:24:33.169 [2024-11-06 14:28:53.449971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.169 [2024-11-06 14:28:53.449987] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:33.169 [2024-11-06 14:28:53.449998] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:33.169 [2024-11-06 14:28:53.450010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23088 len:8 PRP1 0x0 PRP2 0x0 00:24:33.169 [2024-11-06 14:28:53.450026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.169 [2024-11-06 14:28:53.450041] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:33.169 [2024-11-06 14:28:53.450053] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:33.169 [2024-11-06 14:28:53.450065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23096 len:8 PRP1 0x0 PRP2 0x0 00:24:33.169 [2024-11-06 14:28:53.450081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.169 [2024-11-06 14:28:53.450096] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:33.169 [2024-11-06 14:28:53.450108] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:33.169 [2024-11-06 14:28:53.450136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:8 PRP1 0x0 PRP2 
0x0 00:24:33.169 [2024-11-06 14:28:53.450152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.169 [2024-11-06 14:28:53.450168] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:33.169 [2024-11-06 14:28:53.450181] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:33.169 [2024-11-06 14:28:53.450194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23112 len:8 PRP1 0x0 PRP2 0x0 00:24:33.169 [2024-11-06 14:28:53.450210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.169 [2024-11-06 14:28:53.450572] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:24:33.169 [2024-11-06 14:28:53.450642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.169 [2024-11-06 14:28:53.450664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.169 [2024-11-06 14:28:53.450685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.169 [2024-11-06 14:28:53.450702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.169 [2024-11-06 14:28:53.450727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.169 [2024-11-06 14:28:53.450748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.169 [2024-11-06 14:28:53.450767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.169 [2024-11-06 14:28:53.450784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.169 [2024-11-06 14:28:53.450801] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:24:33.169 [2024-11-06 14:28:53.450873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:24:33.169 [2024-11-06 14:28:53.453978] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:24:33.169 [2024-11-06 14:28:53.483413] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
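For context on the burst of ABORTED - SQ DELETION completions above: when the active path is torn down, bdev_nvme aborts every I/O still queued on that qpair and then fails over to the next registered trid, which is exactly the "Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 ... Resetting controller successful" sequence logged here. A minimal sketch of forcing the same transition by hand, using only the RPC calls that appear later in this trace (the socket path, bdev name, and addresses are the ones from this run, not requirements):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  # register two paths to the same subsystem; -x failover keeps the extra path as an alternate
  $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  # drop the active path: queued I/O is aborted (SQ DELETION) and the bdev reconnects on 4421
  $rpc -s $sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc -s $sock bdev_nvme_get_controllers | grep -q NVMe0   # the controller entry should survive the failover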
00:24:33.169 8857.90 IOPS, 34.60 MiB/s [2024-11-06T14:29:00.804Z] 8750.09 IOPS, 34.18 MiB/s [2024-11-06T14:29:00.804Z] 8660.25 IOPS, 33.83 MiB/s [2024-11-06T14:29:00.804Z] 8585.46 IOPS, 33.54 MiB/s [2024-11-06T14:29:00.804Z] 8522.50 IOPS, 33.29 MiB/s [2024-11-06T14:29:00.804Z] 8468.47 IOPS, 33.08 MiB/s 00:24:33.169 Latency(us) 00:24:33.169 [2024-11-06T14:29:00.804Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:33.169 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:33.169 Verification LBA range: start 0x0 length 0x4000 00:24:33.169 NVMe0n1 : 15.01 8468.44 33.08 253.99 0.00 14647.74 509.94 20634.63 00:24:33.169 [2024-11-06T14:29:00.804Z] =================================================================================================================== 00:24:33.169 [2024-11-06T14:29:00.804Z] Total : 8468.44 33.08 253.99 0.00 14647.74 509.94 20634.63 00:24:33.169 Received shutdown signal, test time was about 15.000000 seconds 00:24:33.169 00:24:33.169 Latency(us) 00:24:33.169 [2024-11-06T14:29:00.804Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:33.169 [2024-11-06T14:29:00.804Z] =================================================================================================================== 00:24:33.169 [2024-11-06T14:29:00.804Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:33.169 14:29:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:24:33.169 14:29:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:24:33.169 14:29:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:24:33.169 14:29:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=82307 00:24:33.169 14:29:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:24:33.169 14:29:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 82307 /var/tmp/bdevperf.sock 00:24:33.169 14:29:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 82307 ']' 00:24:33.169 14:29:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:33.169 14:29:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:33.169 14:29:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:33.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
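The bdevperf instance launched at failover.sh@72 uses -z, so it starts idle and only listens on the RPC socket given by -r; the controller is attached afterwards over that socket, and the actual I/O pass is kicked off with bdevperf.py perform_tests. A condensed sketch of that sequence, using the exact binaries and flags visible in this trace (the waitforlisten step is the harness's own poll loop and is only summarized as a comment):

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!
  # wait until the RPC socket is up, attach the target path as NVMe0 (see the rpc.py calls below), then:
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests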
00:24:33.169 14:29:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:33.169 14:29:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:34.107 14:29:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:34.107 14:29:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:24:34.107 14:29:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:24:34.366 [2024-11-06 14:29:01.805979] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:24:34.366 14:29:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:24:34.627 [2024-11-06 14:29:02.005964] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:24:34.627 14:29:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:34.889 NVMe0n1 00:24:34.889 14:29:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:35.148 00:24:35.149 14:29:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:35.408 00:24:35.408 14:29:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:35.408 14:29:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:24:35.667 14:29:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:35.926 14:29:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:24:39.215 14:29:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:39.215 14:29:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:24:39.215 14:29:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=82384 00:24:39.215 14:29:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:39.215 14:29:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 82384 00:24:40.189 { 00:24:40.189 "results": [ 00:24:40.189 { 00:24:40.189 "job": "NVMe0n1", 00:24:40.189 "core_mask": "0x1", 00:24:40.189 "workload": "verify", 00:24:40.189 "status": "finished", 00:24:40.189 "verify_range": { 00:24:40.189 "start": 0, 00:24:40.189 "length": 16384 00:24:40.189 }, 00:24:40.189 "queue_depth": 128, 
00:24:40.189 "io_size": 4096, 00:24:40.189 "runtime": 1.011837, 00:24:40.189 "iops": 8409.457254478735, 00:24:40.189 "mibps": 32.84944240030756, 00:24:40.189 "io_failed": 0, 00:24:40.189 "io_timeout": 0, 00:24:40.189 "avg_latency_us": 15145.001665800586, 00:24:40.189 "min_latency_us": 1572.6008032128514, 00:24:40.189 "max_latency_us": 16318.20080321285 00:24:40.189 } 00:24:40.189 ], 00:24:40.189 "core_count": 1 00:24:40.189 } 00:24:40.189 14:29:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:40.189 [2024-11-06 14:29:00.709263] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:24:40.189 [2024-11-06 14:29:00.709395] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82307 ] 00:24:40.189 [2024-11-06 14:29:00.891617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:40.189 [2024-11-06 14:29:01.038807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:40.189 [2024-11-06 14:29:01.276456] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:40.189 [2024-11-06 14:29:03.292010] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:24:40.189 [2024-11-06 14:29:03.292209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.189 [2024-11-06 14:29:03.292238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.189 [2024-11-06 14:29:03.292266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.189 [2024-11-06 14:29:03.292284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.189 [2024-11-06 14:29:03.292310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.189 [2024-11-06 14:29:03.292329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.189 [2024-11-06 14:29:03.292351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.189 [2024-11-06 14:29:03.292368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.189 [2024-11-06 14:29:03.292398] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:24:40.189 [2024-11-06 14:29:03.292480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:24:40.189 [2024-11-06 14:29:03.292523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:24:40.189 [2024-11-06 14:29:03.304625] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 
00:24:40.189 Running I/O for 1 seconds... 00:24:40.189 8373.00 IOPS, 32.71 MiB/s 00:24:40.189 Latency(us) 00:24:40.189 [2024-11-06T14:29:07.824Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:40.189 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:40.189 Verification LBA range: start 0x0 length 0x4000 00:24:40.189 NVMe0n1 : 1.01 8409.46 32.85 0.00 0.00 15145.00 1572.60 16318.20 00:24:40.189 [2024-11-06T14:29:07.824Z] =================================================================================================================== 00:24:40.189 [2024-11-06T14:29:07.824Z] Total : 8409.46 32.85 0.00 0.00 15145.00 1572.60 16318.20 00:24:40.189 14:29:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:40.189 14:29:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:24:40.448 14:29:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:40.707 14:29:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:40.707 14:29:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:24:40.966 14:29:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:41.225 14:29:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:24:44.560 14:29:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:24:44.560 14:29:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:44.560 14:29:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 82307 00:24:44.560 14:29:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 82307 ']' 00:24:44.560 14:29:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 82307 00:24:44.560 14:29:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:24:44.560 14:29:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:44.560 14:29:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82307 00:24:44.560 killing process with pid 82307 00:24:44.560 14:29:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:44.560 14:29:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:44.560 14:29:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82307' 00:24:44.560 14:29:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 82307 00:24:44.560 14:29:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 82307 00:24:45.497 14:29:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:24:45.756 14:29:13 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:45.756 14:29:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:24:45.756 14:29:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:45.756 14:29:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:24:45.756 14:29:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:45.756 14:29:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:24:45.756 14:29:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:45.756 14:29:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:24:45.756 14:29:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:45.756 14:29:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:45.756 rmmod nvme_tcp 00:24:45.756 rmmod nvme_fabrics 00:24:46.015 rmmod nvme_keyring 00:24:46.015 14:29:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:46.015 14:29:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:24:46.015 14:29:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:24:46.015 14:29:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 82051 ']' 00:24:46.015 14:29:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 82051 00:24:46.015 14:29:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 82051 ']' 00:24:46.015 14:29:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 82051 00:24:46.015 14:29:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:24:46.015 14:29:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:46.015 14:29:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82051 00:24:46.015 killing process with pid 82051 00:24:46.015 14:29:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:46.015 14:29:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:46.015 14:29:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82051' 00:24:46.015 14:29:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 82051 00:24:46.015 14:29:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 82051 00:24:47.393 14:29:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:47.393 14:29:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:47.393 14:29:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:47.393 14:29:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:24:47.393 14:29:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:24:47.393 14:29:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:47.393 14:29:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:24:47.393 14:29:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 
-- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:47.393 14:29:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:47.393 14:29:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:47.393 14:29:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:47.393 14:29:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:47.393 14:29:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:47.393 14:29:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:47.393 14:29:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:47.393 14:29:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:47.393 14:29:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:47.393 14:29:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:47.393 14:29:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:47.652 14:29:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:47.652 14:29:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:47.652 14:29:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:47.652 14:29:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:47.652 14:29:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:47.652 14:29:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:47.652 14:29:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:47.652 14:29:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:24:47.652 00:24:47.652 real 0m35.726s 00:24:47.652 user 2m12.918s 00:24:47.652 sys 0m6.991s 00:24:47.652 14:29:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:47.652 14:29:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:47.652 ************************************ 00:24:47.652 END TEST nvmf_failover 00:24:47.652 ************************************ 00:24:47.652 14:29:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:47.652 14:29:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:47.652 14:29:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:47.652 14:29:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.652 ************************************ 00:24:47.652 START TEST nvmf_host_discovery 00:24:47.652 ************************************ 00:24:47.652 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:47.913 * Looking for test storage... 
00:24:47.913 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:47.913 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:47.913 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:24:47.913 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:47.913 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:47.913 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:47.913 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:47.913 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:47.913 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:24:47.913 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:24:47.913 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:24:47.913 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:24:47.913 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:24:47.913 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:24:47.913 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:24:47.913 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:47.913 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:24:47.913 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:24:47.913 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:47.913 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:47.913 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:24:47.913 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:24:47.913 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:47.913 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:24:47.913 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:24:47.913 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:24:47.913 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:24:47.913 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:47.913 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:24:47.913 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:24:47.913 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:47.913 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:47.913 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:24:47.913 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:47.913 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:47.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:47.913 --rc genhtml_branch_coverage=1 00:24:47.913 --rc genhtml_function_coverage=1 00:24:47.913 --rc genhtml_legend=1 00:24:47.913 --rc geninfo_all_blocks=1 00:24:47.913 --rc geninfo_unexecuted_blocks=1 00:24:47.913 00:24:47.913 ' 00:24:47.913 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:47.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:47.913 --rc genhtml_branch_coverage=1 00:24:47.913 --rc genhtml_function_coverage=1 00:24:47.913 --rc genhtml_legend=1 00:24:47.913 --rc geninfo_all_blocks=1 00:24:47.913 --rc geninfo_unexecuted_blocks=1 00:24:47.913 00:24:47.913 ' 00:24:47.913 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:47.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:47.913 --rc genhtml_branch_coverage=1 00:24:47.913 --rc genhtml_function_coverage=1 00:24:47.913 --rc genhtml_legend=1 00:24:47.913 --rc geninfo_all_blocks=1 00:24:47.913 --rc geninfo_unexecuted_blocks=1 00:24:47.913 00:24:47.913 ' 00:24:47.913 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:47.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:47.913 --rc genhtml_branch_coverage=1 00:24:47.913 --rc genhtml_function_coverage=1 00:24:47.913 --rc genhtml_legend=1 00:24:47.913 --rc geninfo_all_blocks=1 00:24:47.913 --rc geninfo_unexecuted_blocks=1 00:24:47.913 00:24:47.913 ' 00:24:47.913 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:47.913 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:24:47.913 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:47.913 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:47.913 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:47.913 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:47.913 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:47.913 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:47.913 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:47.913 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:47.913 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:47.913 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:47.913 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:24:47.913 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:24:47.913 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:47.913 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:47.913 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:47.913 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:47.913 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:47.913 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:24:47.913 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:47.913 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:47.913 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:47.914 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.914 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.914 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.914 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:24:47.914 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.914 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:24:47.914 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:47.914 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:47.914 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:47.914 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:47.914 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:47.914 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:47.914 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:47.914 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:47.914 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:47.914 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:47.914 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:24:47.914 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:24:47.914 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:24:47.914 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:24:47.914 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:24:47.914 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:24:47.914 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:24:47.914 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:47.914 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:47.914 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:47.914 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:47.914 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:47.914 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:47.914 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:47.914 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:47.914 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:47.914 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:47.914 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:47.914 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:47.914 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:47.914 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:47.914 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:47.914 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:47.914 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:47.914 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:47.914 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:47.914 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:47.914 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:47.914 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:47.914 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:47.914 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:47.914 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:47.914 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
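The NVMF_* variables above and on the next lines describe the virtual topology that nvmf_veth_init is about to build for the discovery test: the initiator interfaces nvmf_init_if/nvmf_init_if2 (10.0.0.1/10.0.0.2) stay in the root namespace, the target interfaces nvmf_tgt_if/nvmf_tgt_if2 (10.0.0.3/10.0.0.4) live inside the nvmf_tgt_ns_spdk namespace, and the two sides are joined through the nvmf_br bridge; the "Cannot find device" / "Cannot open network namespace" messages that follow are the pre-cleanup probes failing on a fresh host, which is expected. A rough sketch of that topology in plain iproute2 terms (an approximation for orientation only, not the literal common.sh code, omitting the second interface pair and the link-up steps):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator end + its bridge port
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target end + its bridge port
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br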
00:24:47.914 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:47.914 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:47.914 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:47.914 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:47.914 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:48.173 Cannot find device "nvmf_init_br" 00:24:48.173 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:24:48.173 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:48.173 Cannot find device "nvmf_init_br2" 00:24:48.173 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:24:48.173 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:48.173 Cannot find device "nvmf_tgt_br" 00:24:48.173 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:24:48.173 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:48.173 Cannot find device "nvmf_tgt_br2" 00:24:48.173 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:24:48.173 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:48.173 Cannot find device "nvmf_init_br" 00:24:48.173 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:24:48.173 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:48.173 Cannot find device "nvmf_init_br2" 00:24:48.173 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:24:48.173 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:48.173 Cannot find device "nvmf_tgt_br" 00:24:48.173 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:24:48.173 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:48.173 Cannot find device "nvmf_tgt_br2" 00:24:48.173 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:24:48.173 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:48.173 Cannot find device "nvmf_br" 00:24:48.173 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:24:48.173 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:48.173 Cannot find device "nvmf_init_if" 00:24:48.173 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:24:48.173 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:48.173 Cannot find device "nvmf_init_if2" 00:24:48.173 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:24:48.173 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:48.173 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:24:48.173 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:24:48.173 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:48.173 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:48.173 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:24:48.173 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:48.173 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:48.173 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:48.173 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:48.432 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:48.432 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:48.432 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:48.432 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:48.432 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:48.432 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:48.432 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:48.432 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:48.432 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:48.432 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:48.432 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:48.432 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:48.432 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:48.432 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:48.432 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:48.432 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:48.432 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:48.432 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:48.432 14:29:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:48.432 14:29:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:48.432 14:29:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:48.432 14:29:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:48.432 14:29:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:48.432 14:29:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:48.692 14:29:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:48.692 14:29:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:48.692 14:29:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:48.692 14:29:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:48.692 14:29:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:48.692 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:48.692 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.109 ms 00:24:48.692 00:24:48.692 --- 10.0.0.3 ping statistics --- 00:24:48.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:48.692 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:24:48.692 14:29:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:48.692 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:48.692 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.121 ms 00:24:48.692 00:24:48.692 --- 10.0.0.4 ping statistics --- 00:24:48.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:48.692 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:24:48.692 14:29:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:48.692 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:48.692 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:24:48.692 00:24:48.692 --- 10.0.0.1 ping statistics --- 00:24:48.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:48.692 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:24:48.692 14:29:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:48.692 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:48.692 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:24:48.692 00:24:48.692 --- 10.0.0.2 ping statistics --- 00:24:48.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:48.692 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:24:48.692 14:29:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:48.692 14:29:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:24:48.692 14:29:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:48.692 14:29:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:48.692 14:29:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:48.692 14:29:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:48.692 14:29:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:48.692 14:29:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:48.692 14:29:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:48.692 14:29:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:24:48.692 14:29:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:48.692 14:29:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:48.692 14:29:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:48.692 14:29:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:48.692 14:29:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=82739 00:24:48.692 14:29:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 82739 00:24:48.692 14:29:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 82739 ']' 00:24:48.692 14:29:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:48.692 14:29:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:48.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:48.692 14:29:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:48.692 14:29:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:48.692 14:29:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:48.692 [2024-11-06 14:29:16.266882] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:24:48.692 [2024-11-06 14:29:16.267005] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:48.954 [2024-11-06 14:29:16.447225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:48.954 [2024-11-06 14:29:16.569805] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:48.954 [2024-11-06 14:29:16.569885] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:48.954 [2024-11-06 14:29:16.569903] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:48.954 [2024-11-06 14:29:16.569926] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:48.954 [2024-11-06 14:29:16.569941] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:48.954 [2024-11-06 14:29:16.571255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:49.227 [2024-11-06 14:29:16.785565] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:49.793 14:29:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:49.793 14:29:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:24:49.793 14:29:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:49.793 14:29:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:49.793 14:29:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:49.793 14:29:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:49.793 14:29:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:49.793 14:29:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.793 14:29:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:49.793 [2024-11-06 14:29:17.204182] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:49.793 14:29:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.793 14:29:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:24:49.793 14:29:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.793 14:29:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:49.793 [2024-11-06 14:29:17.216395] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:24:49.793 14:29:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.793 14:29:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:24:49.793 14:29:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.793 14:29:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:49.793 null0 00:24:49.793 14:29:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.793 14:29:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:24:49.793 14:29:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.793 14:29:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:49.793 null1 00:24:49.793 14:29:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.793 14:29:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:24:49.793 14:29:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.793 14:29:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:49.793 14:29:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.793 14:29:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=82768 00:24:49.793 14:29:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:24:49.793 14:29:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 82768 /tmp/host.sock 00:24:49.793 14:29:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 82768 ']' 00:24:49.793 14:29:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:24:49.793 14:29:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:49.793 14:29:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:49.793 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:49.793 14:29:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:49.793 14:29:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:49.793 [2024-11-06 14:29:17.370532] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:24:49.793 [2024-11-06 14:29:17.370888] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82768 ] 00:24:50.051 [2024-11-06 14:29:17.552668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:50.308 [2024-11-06 14:29:17.694770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:50.308 [2024-11-06 14:29:17.930300] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:50.566 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:50.566 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:24:50.566 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:50.566 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:24:50.566 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.566 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.566 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.566 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:24:50.566 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.566 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.825 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.825 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:24:50.825 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:24:50.825 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:50.825 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:50.825 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.825 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:50.825 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.825 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:50.825 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.825 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:24:50.825 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:24:50.825 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:50.825 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:50.825 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.825 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:50.825 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.825 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:50.825 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.825 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:24:50.825 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:24:50.825 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.825 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.825 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.825 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:24:50.825 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:50.825 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:50.825 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:50.825 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.825 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:50.825 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.825 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.825 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:24:50.825 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:24:50.825 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:50.825 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.825 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.825 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:50.825 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:50.825 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:50.825 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.825 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:24:50.825 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:24:50.825 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.825 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.825 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.825 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- host/discovery.sh@91 -- # get_subsystem_names 00:24:50.825 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:50.825 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.825 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.825 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:50.825 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:50.825 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:50.825 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.084 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:24:51.084 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:24:51.084 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:51.084 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.084 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:51.084 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:51.084 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:51.084 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:51.084 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.084 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:24:51.084 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:51.084 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.084 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:51.084 [2024-11-06 14:29:18.528305] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:51.084 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.084 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:24:51.084 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:51.084 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:51.084 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:51.084 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.084 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:51.084 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:51.084 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.084 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:24:51.084 14:29:18 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:24:51.084 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:51.084 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:51.084 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:51.084 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:51.084 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.084 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:51.084 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.084 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:24:51.084 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:24:51.084 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:51.084 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:51.084 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:51.084 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:24:51.084 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:51.084 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:51.084 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:24:51.084 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:51.084 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:51.084 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.084 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:51.084 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.084 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:51.084 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:24:51.085 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:24:51.085 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:24:51.085 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:24:51.085 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.085 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:51.085 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.085 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:51.085 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:51.085 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:24:51.085 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:51.085 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:51.085 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:24:51.085 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:51.085 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.085 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:51.085 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:51.085 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:51.085 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:51.085 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.343 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == \n\v\m\e\0 ]] 00:24:51.343 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:24:51.601 [2024-11-06 14:29:19.205612] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:24:51.601 [2024-11-06 14:29:19.205668] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:24:51.601 [2024-11-06 14:29:19.205707] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:24:51.602 
[2024-11-06 14:29:19.211668] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:24:51.860 [2024-11-06 14:29:19.274157] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:24:51.860 [2024-11-06 14:29:19.275730] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x61500002b280:1 started. 00:24:51.860 [2024-11-06 14:29:19.278008] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:24:51.860 [2024-11-06 14:29:19.278202] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:24:51.860 [2024-11-06 14:29:19.284423] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x61500002b280 was disconnected and freed. delete nvme_qpair. 00:24:52.119 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:52.119 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:52.119 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:24:52.119 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:52.119 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:52.119 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.119 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.119 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:52.119 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:52.378 14:29:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0 ]] 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:52.378 14:29:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.378 [2024-11-06 14:29:19.906093] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x61500002b500:1 started. 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:52.378 [2024-11-06 14:29:19.913470] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x61500002b500 was disconnected and freed. delete nvme_qpair. 
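At this point the discovery path is wired end to end. A condensed recap of the RPC sequence the test has driven so far, assembled from the rpc_cmd invocations visible above (rpc_cmd is the test harness's RPC wrapper; ordering is abbreviated, and the intermediate get_subsystem_names/get_bdev_list polls are collapsed into the final checks):

  # Target side (default RPC socket of the nvmf_tgt running inside nvmf_tgt_ns_spdk)
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009
  rpc_cmd bdev_null_create null0 1000 512        # null bdevs back the namespaces advertised below
  rpc_cmd bdev_null_create null1 1000 512

  # Host side: a second nvmf_tgt on /tmp/host.sock acts as the discovery client
  rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

  # Back on the target: publish the subsystem and let the AER/log-page path propagate each change
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
  rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1

  # Host-side checks poll these until the expected names appear
  rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs   # -> nvme0
  rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs              # -> nvme0n1 nvme0n2

The trace that follows adds a second listener on port 4421 and then removes the 4420 listener, re-checking the controller's paths and the bdev list after each change.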
00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:24:52.378 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.379 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.379 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:52.379 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.638 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:52.638 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:52.638 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:24:52.638 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:24:52.638 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:24:52.638 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.638 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.638 [2024-11-06 14:29:20.020323] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:24:52.638 [2024-11-06 14:29:20.020734] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:24:52.638 [2024-11-06 14:29:20.020798] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:24:52.638 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.638 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:52.638 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:52.638 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:24:52.638 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:52.638 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:52.638 [2024-11-06 14:29:20.026723] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:24:52.638 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:24:52.638 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:52.638 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.638 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:52.638 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.638 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:52.638 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:52.638 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.638 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:52.638 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.639 [2024-11-06 14:29:20.087448] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.639 to 10.0.0.3:4421 00:24:52.639 [2024-11-06 14:29:20.087719] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:24:52.639 [2024-11-06 14:29:20.087745] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:24:52.639 [2024-11-06 14:29:20.087757] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.639 [2024-11-06 14:29:20.256722] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:24:52.639 [2024-11-06 14:29:20.256784] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:52.639 [2024-11-06 14:29:20.262701] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:24:52.639 [2024-11-06 14:29:20.262746] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:24:52.639 [2024-11-06 14:29:20.262917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:52.639 [2024-11-06 14:29:20.262953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.639 [2024-11-06 14:29:20.262970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:52.639 [2024-11-06 14:29:20.262982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.639 [2024-11-06 14:29:20.262996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:52.639 [2024-11-06 14:29:20.263008] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.639 [2024-11-06 14:29:20.263020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:52.639 [2024-11-06 14:29:20.263032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.639 [2024-11-06 14:29:20.263045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(6) to be set 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:52.639 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:52.899 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.899 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:52.899 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:24:52.899 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:52.899 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:52.899 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:24:52.899 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:52.899 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:52.899 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:24:52.899 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:52.899 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:52.899 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:52.899 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:52.899 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.899 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.899 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.899 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:52.899 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@920 -- # return 0 00:24:52.899 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:52.899 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:52.899 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:24:52.899 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:52.899 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:24:52.899 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:24:52.899 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:52.899 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:52.899 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:52.899 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.899 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:52.899 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.899 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.899 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4421 == \4\4\2\1 ]] 00:24:52.899 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:24:52.899 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:24:52.899 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:52.899 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:52.899 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:52.899 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:24:52.899 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:52.899 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:52.899 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:24:52.899 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:52.899 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:52.899 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.899 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.899 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.899 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:52.899 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:52.899 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:24:52.899 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:24:52.899 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:24:52.899 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.899 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.899 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.900 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:24:52.900 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:24:52.900 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:24:52.900 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:52.900 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:24:52.900 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:24:52.900 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:52.900 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.900 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.900 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:52.900 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:52.900 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:52.900 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.159 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:24:53.159 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:24:53.159 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:24:53.159 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:24:53.159 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:24:53.159 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 
-- # (( max-- )) 00:24:53.159 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:24:53.159 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:24:53.159 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:53.159 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.159 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.159 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:53.159 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:53.159 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:53.159 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.159 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:24:53.159 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:24:53.159 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:24:53.159 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:24:53.159 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:53.159 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:53.159 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:24:53.159 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:53.159 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:53.159 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:24:53.159 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:53.159 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:53.159 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.159 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.160 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.160 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:24:53.160 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:24:53.160 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:24:53.160 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:24:53.160 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:53.160 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.160 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:54.098 [2024-11-06 14:29:21.652954] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:24:54.098 [2024-11-06 14:29:21.653003] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:24:54.098 [2024-11-06 14:29:21.653046] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:24:54.098 [2024-11-06 14:29:21.659042] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:24:54.098 [2024-11-06 14:29:21.725637] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:24:54.098 [2024-11-06 14:29:21.727026] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x61500002c680:1 started. 00:24:54.098 [2024-11-06 14:29:21.729636] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:24:54.098 [2024-11-06 14:29:21.729694] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:24:54.358 [2024-11-06 14:29:21.732175] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x61500002c680 was disconnected and freed. delete nvme_qpair. 
00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:54.358 request: 00:24:54.358 { 00:24:54.358 "name": "nvme", 00:24:54.358 "trtype": "tcp", 00:24:54.358 "traddr": "10.0.0.3", 00:24:54.358 "adrfam": "ipv4", 00:24:54.358 "trsvcid": "8009", 00:24:54.358 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:54.358 "wait_for_attach": true, 00:24:54.358 "method": "bdev_nvme_start_discovery", 00:24:54.358 "req_id": 1 00:24:54.358 } 00:24:54.358 Got JSON-RPC error response 00:24:54.358 response: 00:24:54.358 { 00:24:54.358 "code": -17, 00:24:54.358 "message": "File exists" 00:24:54.358 } 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:54.358 14:29:21 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:54.358 request: 00:24:54.358 { 00:24:54.358 "name": "nvme_second", 00:24:54.358 "trtype": "tcp", 00:24:54.358 "traddr": "10.0.0.3", 00:24:54.358 "adrfam": "ipv4", 00:24:54.358 "trsvcid": "8009", 00:24:54.358 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:54.358 "wait_for_attach": true, 00:24:54.358 "method": "bdev_nvme_start_discovery", 00:24:54.358 "req_id": 1 00:24:54.358 } 00:24:54.358 Got JSON-RPC error response 00:24:54.358 response: 00:24:54.358 { 00:24:54.358 "code": -17, 00:24:54.358 "message": "File exists" 00:24:54.358 } 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:54.358 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.618 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:54.618 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:54.618 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:24:54.618 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:54.618 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:54.618 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:54.618 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:54.618 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:54.618 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:54.618 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.618 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:55.555 [2024-11-06 14:29:23.004189] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.555 [2024-11-06 14:29:23.004268] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002c900 with addr=10.0.0.3, port=8010 00:24:55.555 [2024-11-06 14:29:23.004351] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:55.555 [2024-11-06 14:29:23.004366] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:55.555 [2024-11-06 14:29:23.004381] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:24:56.492 [2024-11-06 14:29:24.002609] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.492 [2024-11-06 14:29:24.002682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002cb80 with addr=10.0.0.3, port=8010 00:24:56.492 [2024-11-06 14:29:24.002754] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:56.492 [2024-11-06 14:29:24.002769] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:56.493 [2024-11-06 14:29:24.002783] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:24:57.431 [2024-11-06 14:29:25.000716] bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:24:57.431 request: 00:24:57.431 { 00:24:57.431 "name": "nvme_second", 00:24:57.431 "trtype": "tcp", 00:24:57.431 "traddr": "10.0.0.3", 00:24:57.431 "adrfam": "ipv4", 00:24:57.431 "trsvcid": "8010", 00:24:57.431 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:57.431 "wait_for_attach": false, 00:24:57.431 "attach_timeout_ms": 3000, 00:24:57.431 "method": "bdev_nvme_start_discovery", 00:24:57.431 "req_id": 1 00:24:57.431 } 00:24:57.431 Got JSON-RPC error response 00:24:57.431 response: 00:24:57.431 { 00:24:57.431 "code": -110, 00:24:57.431 "message": "Connection timed out" 00:24:57.431 } 00:24:57.431 14:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:57.431 14:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:24:57.431 14:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:57.431 14:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:57.431 14:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:57.431 14:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:24:57.431 14:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:57.431 14:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:57.431 14:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.431 14:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:57.431 14:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:57.431 14:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:57.431 14:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.690 14:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:24:57.690 14:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- 
# trap - SIGINT SIGTERM EXIT 00:24:57.690 14:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 82768 00:24:57.690 14:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:24:57.690 14:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:57.690 14:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:24:57.690 14:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:57.690 14:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:24:57.690 14:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:57.690 14:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:57.690 rmmod nvme_tcp 00:24:57.690 rmmod nvme_fabrics 00:24:57.690 rmmod nvme_keyring 00:24:57.690 14:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:57.690 14:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:24:57.690 14:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:24:57.690 14:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 82739 ']' 00:24:57.690 14:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 82739 00:24:57.690 14:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # '[' -z 82739 ']' 00:24:57.690 14:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # kill -0 82739 00:24:57.691 14:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # uname 00:24:57.691 14:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:57.691 14:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82739 00:24:57.691 killing process with pid 82739 00:24:57.691 14:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:57.691 14:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:57.691 14:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82739' 00:24:57.691 14:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@971 -- # kill 82739 00:24:57.691 14:29:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@976 -- # wait 82739 00:24:59.069 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:59.069 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:59.069 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:59.069 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:24:59.069 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:24:59.069 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:59.069 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:24:59.069 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:59.069 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:59.069 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:59.069 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:59.069 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:59.069 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:59.069 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:59.069 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:59.069 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:59.069 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:59.069 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:59.069 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:59.069 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:59.069 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:59.069 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:59.069 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:59.069 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:59.069 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:59.069 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:59.328 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:24:59.328 00:24:59.328 real 0m11.476s 00:24:59.328 user 0m20.172s 00:24:59.328 sys 0m3.053s 00:24:59.328 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:59.328 ************************************ 00:24:59.328 END TEST nvmf_host_discovery 00:24:59.328 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:59.328 ************************************ 00:24:59.328 14:29:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:59.328 14:29:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:59.328 14:29:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:59.328 14:29:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.328 ************************************ 00:24:59.328 START TEST nvmf_host_multipath_status 00:24:59.328 ************************************ 00:24:59.328 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1127 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:59.328 * Looking for test storage... 00:24:59.328 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:59.328 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:59.328 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:24:59.328 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:59.588 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:59.588 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:59.588 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:59.588 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:59.588 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:24:59.588 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:24:59.588 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:24:59.588 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:24:59.588 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:24:59.588 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:24:59.588 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:24:59.588 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:59.588 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:24:59.588 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:24:59.588 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:59.588 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:59.588 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:24:59.588 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:24:59.588 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:59.588 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:24:59.588 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:24:59.588 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:24:59.588 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:24:59.588 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:59.588 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:24:59.588 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:24:59.588 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:59.588 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:59.588 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:24:59.588 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:59.588 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:59.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:59.588 --rc genhtml_branch_coverage=1 00:24:59.588 --rc genhtml_function_coverage=1 00:24:59.588 --rc genhtml_legend=1 00:24:59.588 --rc geninfo_all_blocks=1 00:24:59.588 --rc geninfo_unexecuted_blocks=1 00:24:59.588 00:24:59.588 ' 00:24:59.588 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:59.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:59.588 --rc genhtml_branch_coverage=1 00:24:59.588 --rc genhtml_function_coverage=1 00:24:59.588 --rc genhtml_legend=1 00:24:59.588 --rc geninfo_all_blocks=1 00:24:59.588 --rc geninfo_unexecuted_blocks=1 00:24:59.588 00:24:59.588 ' 00:24:59.588 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:59.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:59.588 --rc genhtml_branch_coverage=1 00:24:59.588 --rc genhtml_function_coverage=1 00:24:59.588 --rc genhtml_legend=1 00:24:59.588 --rc geninfo_all_blocks=1 00:24:59.588 --rc geninfo_unexecuted_blocks=1 00:24:59.588 00:24:59.588 ' 00:24:59.588 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:59.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:59.588 --rc genhtml_branch_coverage=1 00:24:59.588 --rc genhtml_function_coverage=1 00:24:59.588 --rc genhtml_legend=1 00:24:59.589 --rc geninfo_all_blocks=1 00:24:59.589 --rc geninfo_unexecuted_blocks=1 00:24:59.589 00:24:59.589 ' 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:59.589 14:29:27 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:59.589 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:59.589 Cannot find device "nvmf_init_br" 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:59.589 Cannot find device "nvmf_init_br2" 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:59.589 Cannot find device "nvmf_tgt_br" 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:59.589 Cannot find device "nvmf_tgt_br2" 00:24:59.589 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:24:59.590 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:59.590 Cannot find device "nvmf_init_br" 00:24:59.590 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:24:59.590 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:59.590 Cannot find device "nvmf_init_br2" 00:24:59.590 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:24:59.590 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:59.590 Cannot find device "nvmf_tgt_br" 00:24:59.590 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:24:59.590 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:59.590 Cannot find device "nvmf_tgt_br2" 00:24:59.590 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:24:59.590 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:59.849 Cannot find device "nvmf_br" 00:24:59.849 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:24:59.849 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:24:59.849 Cannot find device "nvmf_init_if" 00:24:59.849 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:24:59.849 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:59.849 Cannot find device "nvmf_init_if2" 00:24:59.849 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:24:59.849 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:59.849 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:59.849 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:24:59.849 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:59.849 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:59.849 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:24:59.849 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:59.849 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:59.849 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:59.849 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:59.849 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:59.849 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:59.849 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:59.849 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:59.849 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:59.849 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:59.849 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:59.849 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:59.849 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:59.849 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:59.849 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:59.849 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:59.849 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:59.849 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:59.849 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:59.849 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:59.849 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:59.849 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:59.849 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:59.850 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:59.850 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:00.109 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:00.109 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:00.109 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:00.109 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:00.109 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:00.109 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:00.109 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:00.109 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:00.109 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:00.109 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.096 ms 00:25:00.109 00:25:00.109 --- 10.0.0.3 ping statistics --- 00:25:00.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.109 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:25:00.109 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:00.109 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:25:00.109 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.076 ms 00:25:00.109 00:25:00.109 --- 10.0.0.4 ping statistics --- 00:25:00.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.109 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:25:00.109 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:00.109 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:00.109 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:25:00.109 00:25:00.109 --- 10.0.0.1 ping statistics --- 00:25:00.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.109 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:25:00.109 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:00.109 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:00.109 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:25:00.109 00:25:00.109 --- 10.0.0.2 ping statistics --- 00:25:00.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.109 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:25:00.109 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:00.109 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:25:00.109 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:00.109 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:00.109 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:00.109 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:00.109 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:00.109 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:00.109 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:00.109 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:00.109 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:00.109 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:00.109 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:00.109 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=83291 00:25:00.109 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:00.109 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 83291 00:25:00.109 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 83291 ']' 00:25:00.109 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:00.109 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:00.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:00.109 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
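The entries above start the SPDK target inside the nvmf_tgt_ns_spdk namespace and then block in waitforlisten until its RPC socket at /var/tmp/spdk.sock answers. A minimal sketch of that launch-and-wait pattern, using only commands seen in the trace plus an assumed rpc_get_methods polling loop (the real waitforlisten helper in autotest_common.sh additionally caps retries, max_retries=100 in the trace):

    # start nvmf_tgt inside the test namespace, exactly as traced above
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    # poll the UNIX-domain RPC socket until the app accepts commands (assumed stand-in for waitforlisten)
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1    # give up if the target died during startup
        sleep 0.5
    done
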
00:25:00.109 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:00.109 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:00.109 [2024-11-06 14:29:27.715360] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:25:00.109 [2024-11-06 14:29:27.715502] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:00.368 [2024-11-06 14:29:27.902801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:00.626 [2024-11-06 14:29:28.052138] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:00.626 [2024-11-06 14:29:28.052199] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:00.626 [2024-11-06 14:29:28.052215] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:00.626 [2024-11-06 14:29:28.052236] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:00.626 [2024-11-06 14:29:28.052250] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:00.626 [2024-11-06 14:29:28.054461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:00.626 [2024-11-06 14:29:28.054531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:00.885 [2024-11-06 14:29:28.298075] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:01.144 14:29:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:01.144 14:29:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:25:01.144 14:29:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:01.144 14:29:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:01.144 14:29:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:01.144 14:29:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:01.144 14:29:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=83291 00:25:01.144 14:29:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:01.404 [2024-11-06 14:29:28.833609] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:01.404 14:29:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:01.663 Malloc0 00:25:01.663 14:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:25:01.928 14:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:02.193 14:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:02.193 [2024-11-06 14:29:29.747275] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:02.193 14:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:25:02.451 [2024-11-06 14:29:29.943223] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:25:02.451 14:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:02.451 14:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=83341 00:25:02.451 14:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:02.451 14:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 83341 /var/tmp/bdevperf.sock 00:25:02.451 14:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 83341 ']' 00:25:02.451 14:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:02.451 14:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:02.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:02.451 14:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
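Condensed, the target-side RPC sequence traced above builds one subsystem (ANA reporting enabled) backed by a 64 MB, 512-byte-block malloc bdev and exposes it on two TCP listeners at the same address, ports 4420 and 4421; those two listeners are the two paths exercised below. All flags, names, and addresses are copied from the trace; only the $rpc shorthand is ours:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0                 # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
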
00:25:02.451 14:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:02.451 14:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:03.386 14:29:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:03.386 14:29:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:25:03.386 14:29:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:03.646 14:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:03.905 Nvme0n1 00:25:03.905 14:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:04.164 Nvme0n1 00:25:04.164 14:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:04.165 14:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:25:06.071 14:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:25:06.071 14:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:25:06.330 14:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:25:06.589 14:29:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:25:07.971 14:29:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:25:07.971 14:29:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:07.971 14:29:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:07.971 14:29:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:07.971 14:29:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:07.971 14:29:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:07.971 14:29:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:07.971 14:29:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:08.231 14:29:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:08.231 14:29:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:08.231 14:29:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:08.231 14:29:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:08.491 14:29:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:08.491 14:29:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:08.491 14:29:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:08.491 14:29:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:08.750 14:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:08.750 14:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:08.750 14:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:08.750 14:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:09.009 14:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:09.009 14:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:09.009 14:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:09.009 14:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:09.268 14:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:09.268 14:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:25:09.268 14:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:25:09.268 14:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:25:09.527 14:29:37 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:25:10.461 14:29:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:25:10.461 14:29:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:10.461 14:29:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:10.461 14:29:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:10.719 14:29:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:10.719 14:29:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:10.720 14:29:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:10.720 14:29:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:10.978 14:29:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:10.978 14:29:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:10.978 14:29:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:10.978 14:29:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:11.238 14:29:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:11.238 14:29:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:11.238 14:29:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:11.238 14:29:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:11.497 14:29:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:11.497 14:29:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:11.497 14:29:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:11.497 14:29:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:11.756 14:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:11.756 14:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:11.756 14:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:11.756 14:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:11.756 14:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:11.756 14:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:25:11.756 14:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:25:12.015 14:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:25:12.273 14:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:25:13.211 14:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:25:13.212 14:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:13.212 14:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.212 14:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:13.471 14:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:13.471 14:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:13.471 14:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.471 14:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:13.730 14:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:13.730 14:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:13.730 14:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.730 14:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:13.989 14:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:13.989 14:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:25:13.989 14:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.989 14:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:14.248 14:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:14.248 14:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:14.248 14:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:14.248 14:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:14.507 14:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:14.507 14:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:14.507 14:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:14.507 14:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:14.766 14:29:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:14.766 14:29:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:25:14.766 14:29:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:25:15.028 14:29:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:25:15.308 14:29:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:25:16.245 14:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:25:16.245 14:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:16.245 14:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:16.245 14:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:16.504 14:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:16.504 14:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:16.504 14:29:43 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:16.504 14:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:16.764 14:29:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:16.764 14:29:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:16.764 14:29:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:16.764 14:29:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:16.764 14:29:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:16.764 14:29:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:16.764 14:29:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:16.764 14:29:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:17.023 14:29:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:17.023 14:29:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:17.023 14:29:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:17.023 14:29:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:17.282 14:29:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:17.282 14:29:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:17.282 14:29:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:17.282 14:29:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:17.541 14:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:17.541 14:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:25:17.541 14:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:25:17.800 14:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:25:18.059 14:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:25:18.996 14:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:25:18.996 14:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:18.996 14:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:18.996 14:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:19.255 14:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:19.255 14:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:19.255 14:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:19.255 14:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:19.514 14:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:19.514 14:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:19.514 14:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:19.514 14:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:19.775 14:29:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:19.775 14:29:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:19.775 14:29:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:19.775 14:29:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:19.775 14:29:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:19.775 14:29:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:19.775 14:29:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:19.775 14:29:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:25:20.034 14:29:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:20.034 14:29:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:20.034 14:29:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:20.034 14:29:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:20.293 14:29:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:20.293 14:29:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:25:20.293 14:29:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:25:20.553 14:29:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:25:20.811 14:29:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:25:21.744 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:25:21.744 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:21.744 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:21.744 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:22.003 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:22.003 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:22.003 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:22.003 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:22.261 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:22.261 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:22.261 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:22.261 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
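Each check_status round above is built from the same probe: ask the bdevperf app (over /var/tmp/bdevperf.sock) for its NVMe I/O paths and pull the current/connected/accessible flag of the path that goes through a given listener port. Gathering the traced commands into one parameterized helper (the function body below is ours; the script's own port_status in host/multipath_status.sh is traced above with the port expanded inline):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdevperf_rpc_sock=/var/tmp/bdevperf.sock
    # port_status <port> <field>, e.g. port_status 4421 accessible
    port_status() {
        local port=$1 field=$2
        $rpc -s "$bdevperf_rpc_sock" bdev_nvme_get_io_paths \
            | jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$field"
    }
    [[ $(port_status 4420 current) == true ]]     # e.g. assert port 4420 is currently carrying I/O
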
00:25:22.521 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:22.521 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:22.521 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:22.521 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:22.521 14:29:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:22.521 14:29:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:22.779 14:29:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:22.779 14:29:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:22.779 14:29:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:22.779 14:29:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:22.779 14:29:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:22.779 14:29:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:23.039 14:29:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:23.039 14:29:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:25:23.298 14:29:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:25:23.298 14:29:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:25:23.559 14:29:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:25:23.818 14:29:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:25:24.796 14:29:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:25:24.796 14:29:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:24.796 14:29:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
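Every test cycle in this section has the same shape: set the ANA state of each listener with nvmf_subsystem_listener_set_ana_state, sleep one second so the initiator can pick up the change, then re-read the path flags through bdevperf. For the active_active case just configured above, a condensed sketch using only commands from the trace (the inline expectations mirror check_status true true true true true true):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
    $rpc nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.3 -s 4420 -n optimized
    $rpc nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.3 -s 4421 -n optimized
    sleep 1
    # with both listeners optimized under active_active, both paths report current=true
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
        | jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'   # expect: true
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
        | jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'   # expect: true
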
00:25:24.796 14:29:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:25.055 14:29:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:25.055 14:29:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:25.055 14:29:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:25.055 14:29:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:25.055 14:29:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:25.055 14:29:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:25.055 14:29:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:25.055 14:29:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:25.314 14:29:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:25.314 14:29:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:25.314 14:29:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:25.314 14:29:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:25.574 14:29:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:25.574 14:29:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:25.574 14:29:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:25.574 14:29:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:25.834 14:29:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:25.834 14:29:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:25.834 14:29:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:25.834 14:29:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:26.093 14:29:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:26.093 
14:29:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:25:26.093 14:29:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:25:26.353 14:29:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:25:26.612 14:29:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:25:27.550 14:29:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:25:27.550 14:29:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:27.550 14:29:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.550 14:29:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:27.809 14:29:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:27.809 14:29:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:27.809 14:29:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.809 14:29:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:28.069 14:29:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:28.069 14:29:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:28.069 14:29:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:28.069 14:29:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:28.328 14:29:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:28.328 14:29:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:28.328 14:29:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:28.328 14:29:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:28.328 14:29:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:28.328 14:29:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:28.328 14:29:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:28.329 14:29:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:28.588 14:29:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:28.588 14:29:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:28.588 14:29:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:28.588 14:29:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:28.848 14:29:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:28.848 14:29:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:25:28.848 14:29:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:25:29.107 14:29:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:25:29.367 14:29:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:25:30.302 14:29:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:25:30.302 14:29:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:30.302 14:29:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:30.302 14:29:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:30.560 14:29:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:30.560 14:29:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:30.560 14:29:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:30.560 14:29:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:30.818 14:29:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:30.818 14:29:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:25:30.818 14:29:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:30.818 14:29:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:30.818 14:29:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:30.818 14:29:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:30.818 14:29:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:30.819 14:29:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:31.078 14:29:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:31.078 14:29:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:31.078 14:29:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:31.078 14:29:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:31.338 14:29:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:31.338 14:29:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:31.338 14:29:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:31.338 14:29:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:31.597 14:29:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:31.597 14:29:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:25:31.597 14:29:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:25:31.856 14:29:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:25:32.120 14:29:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:25:33.059 14:30:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:25:33.059 14:30:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:33.059 14:30:00 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:33.059 14:30:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:33.318 14:30:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:33.318 14:30:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:33.318 14:30:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:33.318 14:30:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:33.578 14:30:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:33.578 14:30:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:33.578 14:30:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:33.578 14:30:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:33.578 14:30:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:33.578 14:30:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:33.578 14:30:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:33.578 14:30:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:33.853 14:30:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:33.853 14:30:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:33.853 14:30:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:33.853 14:30:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:34.112 14:30:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:34.112 14:30:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:34.112 14:30:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:34.112 14:30:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:25:34.372 14:30:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:34.372 14:30:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 83341 00:25:34.372 14:30:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 83341 ']' 00:25:34.372 14:30:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 83341 00:25:34.372 14:30:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:25:34.372 14:30:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:34.372 14:30:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83341 00:25:34.372 killing process with pid 83341 00:25:34.372 14:30:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:25:34.372 14:30:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:25:34.372 14:30:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83341' 00:25:34.372 14:30:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 83341 00:25:34.372 14:30:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 83341 00:25:34.372 { 00:25:34.372 "results": [ 00:25:34.372 { 00:25:34.372 "job": "Nvme0n1", 00:25:34.372 "core_mask": "0x4", 00:25:34.372 "workload": "verify", 00:25:34.372 "status": "terminated", 00:25:34.372 "verify_range": { 00:25:34.372 "start": 0, 00:25:34.372 "length": 16384 00:25:34.372 }, 00:25:34.372 "queue_depth": 128, 00:25:34.372 "io_size": 4096, 00:25:34.372 "runtime": 30.190895, 00:25:34.372 "iops": 7555.5229482266095, 00:25:34.372 "mibps": 29.513761516510193, 00:25:34.372 "io_failed": 0, 00:25:34.372 "io_timeout": 0, 00:25:34.372 "avg_latency_us": 16915.286019030093, 00:25:34.372 "min_latency_us": 194.10763052208836, 00:25:34.372 "max_latency_us": 4042702.6506024096 00:25:34.372 } 00:25:34.372 ], 00:25:34.372 "core_count": 1 00:25:34.372 } 00:25:35.753 14:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 83341 00:25:35.753 14:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:25:35.753 [2024-11-06 14:29:30.044723] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:25:35.753 [2024-11-06 14:29:30.044869] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83341 ] 00:25:35.753 [2024-11-06 14:29:30.225411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:35.753 [2024-11-06 14:29:30.377048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:35.753 [2024-11-06 14:29:30.622800] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:35.753 Running I/O for 90 seconds... 
00:25:35.753 7937.00 IOPS, 31.00 MiB/s [2024-11-06T14:30:03.388Z] 8420.50 IOPS, 32.89 MiB/s [2024-11-06T14:30:03.388Z] 8600.33 IOPS, 33.60 MiB/s [2024-11-06T14:30:03.388Z] 8736.25 IOPS, 34.13 MiB/s [2024-11-06T14:30:03.388Z] 8782.40 IOPS, 34.31 MiB/s [2024-11-06T14:30:03.388Z] 8977.17 IOPS, 35.07 MiB/s [2024-11-06T14:30:03.388Z] 9116.57 IOPS, 35.61 MiB/s [2024-11-06T14:30:03.388Z] 9097.00 IOPS, 35.54 MiB/s [2024-11-06T14:30:03.388Z] 9059.67 IOPS, 35.39 MiB/s [2024-11-06T14:30:03.388Z] 9029.30 IOPS, 35.27 MiB/s [2024-11-06T14:30:03.388Z] 9005.18 IOPS, 35.18 MiB/s [2024-11-06T14:30:03.388Z] 8982.75 IOPS, 35.09 MiB/s [2024-11-06T14:30:03.388Z] 8972.38 IOPS, 35.05 MiB/s [2024-11-06T14:30:03.388Z] [2024-11-06 14:29:45.272847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:52552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.753 [2024-11-06 14:29:45.272953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:35.753 [2024-11-06 14:29:45.273026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:52560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.754 [2024-11-06 14:29:45.273049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:35.754 [2024-11-06 14:29:45.273074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:52568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.754 [2024-11-06 14:29:45.273093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:35.754 [2024-11-06 14:29:45.273117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:52576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.754 [2024-11-06 14:29:45.273135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:35.754 [2024-11-06 14:29:45.273159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:52584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.754 [2024-11-06 14:29:45.273177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:35.754 [2024-11-06 14:29:45.273201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:52592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.754 [2024-11-06 14:29:45.273219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:35.754 [2024-11-06 14:29:45.273243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:52600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.754 [2024-11-06 14:29:45.273260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:35.754 [2024-11-06 14:29:45.273285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:52608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.754 [2024-11-06 14:29:45.273304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003f 
p:0 m:0 dnr:0 00:25:35.754 [2024-11-06 14:29:45.273514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:52616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.754 [2024-11-06 14:29:45.273541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:35.754 [2024-11-06 14:29:45.273594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:52624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.754 [2024-11-06 14:29:45.273612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:35.754 [2024-11-06 14:29:45.273637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:52632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.754 [2024-11-06 14:29:45.273654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:35.754 [2024-11-06 14:29:45.273681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:52640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.754 [2024-11-06 14:29:45.273698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:35.754 [2024-11-06 14:29:45.273723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:52104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.754 [2024-11-06 14:29:45.273740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:35.754 [2024-11-06 14:29:45.273766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:52112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.754 [2024-11-06 14:29:45.273784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:35.754 [2024-11-06 14:29:45.273809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:52120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.754 [2024-11-06 14:29:45.273827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:35.754 [2024-11-06 14:29:45.273867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:52128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.754 [2024-11-06 14:29:45.273884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:35.754 [2024-11-06 14:29:45.273909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:52136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.754 [2024-11-06 14:29:45.273927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:35.754 [2024-11-06 14:29:45.273953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:52144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.754 [2024-11-06 14:29:45.273970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:35.754 [2024-11-06 14:29:45.273995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:52152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.754 [2024-11-06 14:29:45.274012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:35.754 [2024-11-06 14:29:45.274038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:52160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.754 [2024-11-06 14:29:45.274055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:35.754 [2024-11-06 14:29:45.274079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:52648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.754 [2024-11-06 14:29:45.274096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:35.754 [2024-11-06 14:29:45.274131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:52656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.754 [2024-11-06 14:29:45.274149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:35.754 [2024-11-06 14:29:45.274174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:52664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.754 [2024-11-06 14:29:45.274192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:35.754 [2024-11-06 14:29:45.274217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:52672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.754 [2024-11-06 14:29:45.274235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:35.754 [2024-11-06 14:29:45.274367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:52680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.754 [2024-11-06 14:29:45.274388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:35.754 [2024-11-06 14:29:45.274416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:52688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.754 [2024-11-06 14:29:45.274434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:35.754 [2024-11-06 14:29:45.274460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:52696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.754 [2024-11-06 14:29:45.274487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:35.754 [2024-11-06 14:29:45.274514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:52704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.754 [2024-11-06 14:29:45.274531] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:35.754 [2024-11-06 14:29:45.274557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:52712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.754 [2024-11-06 14:29:45.274575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:35.754 [2024-11-06 14:29:45.274600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:52720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.754 [2024-11-06 14:29:45.274618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:35.754 [2024-11-06 14:29:45.274644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:52728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.754 [2024-11-06 14:29:45.274662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:35.754 [2024-11-06 14:29:45.274704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:52736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.754 [2024-11-06 14:29:45.274722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:35.754 [2024-11-06 14:29:45.274748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:52744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.754 [2024-11-06 14:29:45.274766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:35.754 [2024-11-06 14:29:45.274792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:52752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.754 [2024-11-06 14:29:45.274818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:35.754 [2024-11-06 14:29:45.274858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:52760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.754 [2024-11-06 14:29:45.274877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:35.754 [2024-11-06 14:29:45.274904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:52768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.754 [2024-11-06 14:29:45.274921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:35.754 [2024-11-06 14:29:45.274948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:52776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.754 [2024-11-06 14:29:45.274965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:35.754 [2024-11-06 14:29:45.274990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:52784 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:35.754 [2024-11-06 14:29:45.275007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:35.754 [2024-11-06 14:29:45.275033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:52168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.754 [2024-11-06 14:29:45.275050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:35.754 [2024-11-06 14:29:45.275075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:52176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.754 [2024-11-06 14:29:45.275092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:35.755 [2024-11-06 14:29:45.275118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:52184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.755 [2024-11-06 14:29:45.275135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:35.755 [2024-11-06 14:29:45.275161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:52192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.755 [2024-11-06 14:29:45.275179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:35.755 [2024-11-06 14:29:45.275204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:52200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.755 [2024-11-06 14:29:45.275221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:35.755 [2024-11-06 14:29:45.275247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:52208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.755 [2024-11-06 14:29:45.275263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:35.755 [2024-11-06 14:29:45.275288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:52216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.755 [2024-11-06 14:29:45.275305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:35.755 [2024-11-06 14:29:45.275330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:52224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.755 [2024-11-06 14:29:45.275354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:35.755 [2024-11-06 14:29:45.275380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:52792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.755 [2024-11-06 14:29:45.275398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:35.755 [2024-11-06 14:29:45.275424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:3 nsid:1 lba:52800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.755 [2024-11-06 14:29:45.275442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:35.755 [2024-11-06 14:29:45.275483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:52808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.755 [2024-11-06 14:29:45.275502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:35.755 [2024-11-06 14:29:45.275529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:52816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.755 [2024-11-06 14:29:45.275546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:35.755 [2024-11-06 14:29:45.275572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:52824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.755 [2024-11-06 14:29:45.275589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:35.755 [2024-11-06 14:29:45.275616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:52832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.755 [2024-11-06 14:29:45.275633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:35.755 [2024-11-06 14:29:45.275658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:52840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.755 [2024-11-06 14:29:45.275675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:35.755 [2024-11-06 14:29:45.275701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:52848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.755 [2024-11-06 14:29:45.275717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:35.755 [2024-11-06 14:29:45.275743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:52856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.755 [2024-11-06 14:29:45.275760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:35.755 [2024-11-06 14:29:45.275786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:52864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.755 [2024-11-06 14:29:45.275803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:35.755 [2024-11-06 14:29:45.275831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:52872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.755 [2024-11-06 14:29:45.275860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:35.755 [2024-11-06 14:29:45.275886] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:52880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.755 [2024-11-06 14:29:45.275904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:35.755 [2024-11-06 14:29:45.275937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:52888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.755 [2024-11-06 14:29:45.275954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:35.755 [2024-11-06 14:29:45.275980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:52896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.755 [2024-11-06 14:29:45.275998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:35.755 [2024-11-06 14:29:45.276024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:52904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.755 [2024-11-06 14:29:45.276041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:35.755 [2024-11-06 14:29:45.276066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:52912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.755 [2024-11-06 14:29:45.276083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:35.755 [2024-11-06 14:29:45.276109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:52920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.755 [2024-11-06 14:29:45.276126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:35.755 [2024-11-06 14:29:45.276152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:52928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.755 [2024-11-06 14:29:45.276169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:35.755 [2024-11-06 14:29:45.276193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:52232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.755 [2024-11-06 14:29:45.276211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:35.755 [2024-11-06 14:29:45.276237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:52240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.755 [2024-11-06 14:29:45.276254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:35.755 [2024-11-06 14:29:45.276279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:52248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.755 [2024-11-06 14:29:45.276296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007a p:0 m:0 dnr:0 
00:25:35.755 [2024-11-06 14:29:45.276321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:52256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.755 [2024-11-06 14:29:45.276339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:35.755 [2024-11-06 14:29:45.276365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:52264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.755 [2024-11-06 14:29:45.276382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:35.755 [2024-11-06 14:29:45.276407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:52272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.755 [2024-11-06 14:29:45.276424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:35.755 [2024-11-06 14:29:45.276455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:52280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.755 [2024-11-06 14:29:45.276472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:35.755 [2024-11-06 14:29:45.276497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:52288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.755 [2024-11-06 14:29:45.276514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:35.755 [2024-11-06 14:29:45.276539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:52296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.755 [2024-11-06 14:29:45.276555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:35.755 [2024-11-06 14:29:45.276580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:52304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.755 [2024-11-06 14:29:45.276597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:35.755 [2024-11-06 14:29:45.276622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:52312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.755 [2024-11-06 14:29:45.276639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:35.755 [2024-11-06 14:29:45.276663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:52320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.755 [2024-11-06 14:29:45.276681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:35.756 [2024-11-06 14:29:45.276706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:52328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.756 [2024-11-06 14:29:45.276723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:35.756 [2024-11-06 14:29:45.276749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:52336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.756 [2024-11-06 14:29:45.276767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:35.756 [2024-11-06 14:29:45.276792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:52344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.756 [2024-11-06 14:29:45.276809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:35.756 [2024-11-06 14:29:45.276855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:52352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.756 [2024-11-06 14:29:45.276874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:35.756 [2024-11-06 14:29:45.276922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:52936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.756 [2024-11-06 14:29:45.276941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:35.756 [2024-11-06 14:29:45.276966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:52944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.756 [2024-11-06 14:29:45.276983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:35.756 [2024-11-06 14:29:45.277010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:52952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.756 [2024-11-06 14:29:45.277036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:35.756 [2024-11-06 14:29:45.277062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:52960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.756 [2024-11-06 14:29:45.277079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:35.756 [2024-11-06 14:29:45.277104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:52968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.756 [2024-11-06 14:29:45.277121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:35.756 [2024-11-06 14:29:45.277146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:52976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.756 [2024-11-06 14:29:45.277164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:35.756 [2024-11-06 14:29:45.277188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:52984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.756 [2024-11-06 14:29:45.277205] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:35.756 [2024-11-06 14:29:45.277230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:52992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.756 [2024-11-06 14:29:45.277247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:35.756 [2024-11-06 14:29:45.277272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:53000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.756 [2024-11-06 14:29:45.277289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:35.756 [2024-11-06 14:29:45.277315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:53008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.756 [2024-11-06 14:29:45.277331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:35.756 [2024-11-06 14:29:45.277356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:53016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.756 [2024-11-06 14:29:45.277374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:35.756 [2024-11-06 14:29:45.277399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:53024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.756 [2024-11-06 14:29:45.277416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:35.756 [2024-11-06 14:29:45.277442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:53032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.756 [2024-11-06 14:29:45.277459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:35.756 [2024-11-06 14:29:45.277485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:53040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.756 [2024-11-06 14:29:45.277503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:35.756 [2024-11-06 14:29:45.277527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:53048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.756 [2024-11-06 14:29:45.277551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:35.756 [2024-11-06 14:29:45.277591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:52360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.756 [2024-11-06 14:29:45.277609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:35.756 [2024-11-06 14:29:45.277634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:52368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:35.756 [2024-11-06 14:29:45.277651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:35.756 [2024-11-06 14:29:45.277677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:52376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.756 [2024-11-06 14:29:45.277695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:35.756 [2024-11-06 14:29:45.277720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:52384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.756 [2024-11-06 14:29:45.277737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:35.756 [2024-11-06 14:29:45.277764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:52392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.756 [2024-11-06 14:29:45.277781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:35.756 [2024-11-06 14:29:45.277806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:52400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.756 [2024-11-06 14:29:45.277824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:35.756 [2024-11-06 14:29:45.277861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:52408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.756 [2024-11-06 14:29:45.277880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:35.756 [2024-11-06 14:29:45.277905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:52416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.756 [2024-11-06 14:29:45.277922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:35.756 [2024-11-06 14:29:45.277948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:52424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.756 [2024-11-06 14:29:45.277965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:35.756 [2024-11-06 14:29:45.277991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:52432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.756 [2024-11-06 14:29:45.278008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:35.756 [2024-11-06 14:29:45.278033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:52440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.756 [2024-11-06 14:29:45.278050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:35.756 [2024-11-06 14:29:45.278076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 
nsid:1 lba:52448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.756 [2024-11-06 14:29:45.278094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:35.756 [2024-11-06 14:29:45.278127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:52456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.756 [2024-11-06 14:29:45.278145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:35.756 [2024-11-06 14:29:45.278170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:52464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.756 [2024-11-06 14:29:45.278187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:35.756 [2024-11-06 14:29:45.278212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:52472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.756 [2024-11-06 14:29:45.278230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:35.756 [2024-11-06 14:29:45.278255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:52480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.756 [2024-11-06 14:29:45.278272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:35.756 [2024-11-06 14:29:45.278297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:53056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.756 [2024-11-06 14:29:45.278315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:35.757 [2024-11-06 14:29:45.278344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:53064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.757 [2024-11-06 14:29:45.278362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:35.757 [2024-11-06 14:29:45.278387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:53072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.757 [2024-11-06 14:29:45.278404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:35.757 [2024-11-06 14:29:45.278433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:53080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.757 [2024-11-06 14:29:45.278451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:35.757 [2024-11-06 14:29:45.278484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:53088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.757 [2024-11-06 14:29:45.278502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:35.757 [2024-11-06 14:29:45.278527] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:53096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.757 [2024-11-06 14:29:45.278545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:35.757 [2024-11-06 14:29:45.278570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:53104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.757 [2024-11-06 14:29:45.278588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:35.757 [2024-11-06 14:29:45.278613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:53112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.757 [2024-11-06 14:29:45.278631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:35.757 [2024-11-06 14:29:45.278662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:53120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.757 [2024-11-06 14:29:45.278680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:35.757 [2024-11-06 14:29:45.278705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:52488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.757 [2024-11-06 14:29:45.278722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:35.757 [2024-11-06 14:29:45.278747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:52496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.757 [2024-11-06 14:29:45.278764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:35.757 [2024-11-06 14:29:45.278789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:52504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.757 [2024-11-06 14:29:45.278806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:35.757 [2024-11-06 14:29:45.278831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:52512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.757 [2024-11-06 14:29:45.278859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:35.757 [2024-11-06 14:29:45.278885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.757 [2024-11-06 14:29:45.278902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:35.757 [2024-11-06 14:29:45.278927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:52528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.757 [2024-11-06 14:29:45.278945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 
00:25:35.757 [2024-11-06 14:29:45.278969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:52536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.757 [2024-11-06 14:29:45.278987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:35.757 [2024-11-06 14:29:45.279011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:52544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.757 [2024-11-06 14:29:45.279029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:35.757 8657.21 IOPS, 33.82 MiB/s [2024-11-06T14:30:03.392Z] 8080.07 IOPS, 31.56 MiB/s [2024-11-06T14:30:03.392Z] 7575.06 IOPS, 29.59 MiB/s [2024-11-06T14:30:03.392Z] 7129.47 IOPS, 27.85 MiB/s [2024-11-06T14:30:03.392Z] 6951.11 IOPS, 27.15 MiB/s [2024-11-06T14:30:03.392Z] 7013.00 IOPS, 27.39 MiB/s [2024-11-06T14:30:03.392Z] 7075.30 IOPS, 27.64 MiB/s [2024-11-06T14:30:03.392Z] 7137.05 IOPS, 27.88 MiB/s [2024-11-06T14:30:03.392Z] 7197.05 IOPS, 28.11 MiB/s [2024-11-06T14:30:03.392Z] 7247.17 IOPS, 28.31 MiB/s [2024-11-06T14:30:03.392Z] 7293.21 IOPS, 28.49 MiB/s [2024-11-06T14:30:03.392Z] 7333.64 IOPS, 28.65 MiB/s [2024-11-06T14:30:03.392Z] 7382.50 IOPS, 28.84 MiB/s [2024-11-06T14:30:03.392Z] 7431.30 IOPS, 29.03 MiB/s [2024-11-06T14:30:03.392Z] [2024-11-06 14:29:59.493112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:84744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.757 [2024-11-06 14:29:59.493197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:35.757 [2024-11-06 14:29:59.493267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:84760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.757 [2024-11-06 14:29:59.493288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:35.757 [2024-11-06 14:29:59.493340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:84312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.757 [2024-11-06 14:29:59.493360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:35.757 [2024-11-06 14:29:59.493385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:84344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.757 [2024-11-06 14:29:59.493404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:35.757 [2024-11-06 14:29:59.493428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:84376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.757 [2024-11-06 14:29:59.493446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:35.757 [2024-11-06 14:29:59.493471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:84408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.757 [2024-11-06 14:29:59.493489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:35.757 [2024-11-06 14:29:59.493514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:84440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.757 [2024-11-06 14:29:59.493532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:35.757 [2024-11-06 14:29:59.493557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.757 [2024-11-06 14:29:59.493575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:35.757 [2024-11-06 14:29:59.493599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:84504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.757 [2024-11-06 14:29:59.493615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:35.757 [2024-11-06 14:29:59.493640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:84544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.757 [2024-11-06 14:29:59.493657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:35.757 [2024-11-06 14:29:59.493681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:84784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.757 [2024-11-06 14:29:59.493698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:35.757 [2024-11-06 14:29:59.493723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:84800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.757 [2024-11-06 14:29:59.493741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:35.757 [2024-11-06 14:29:59.493766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:84128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.757 [2024-11-06 14:29:59.493783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:35.757 [2024-11-06 14:29:59.493807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:84160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.757 [2024-11-06 14:29:59.493826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:35.757 [2024-11-06 14:29:59.493873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:84192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.757 [2024-11-06 14:29:59.493908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:35.757 [2024-11-06 14:29:59.493935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:84224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.758 [2024-11-06 14:29:59.493953] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:35.758 [2024-11-06 14:29:59.493980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:84816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.758 [2024-11-06 14:29:59.493998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:35.758 [2024-11-06 14:29:59.494024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:84832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.758 [2024-11-06 14:29:59.494044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:35.758 [2024-11-06 14:29:59.494072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:84848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.758 [2024-11-06 14:29:59.494091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:35.758 [2024-11-06 14:29:59.494116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:84864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.758 [2024-11-06 14:29:59.494136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:35.758 [2024-11-06 14:29:59.494182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:84880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.758 [2024-11-06 14:29:59.494204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:35.758 [2024-11-06 14:29:59.494230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:84896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.758 [2024-11-06 14:29:59.494249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:35.758 [2024-11-06 14:29:59.494276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:84248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.758 [2024-11-06 14:29:59.494294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:35.758 [2024-11-06 14:29:59.494321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:84280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.758 [2024-11-06 14:29:59.494342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:35.758 [2024-11-06 14:29:59.494367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:84904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.758 [2024-11-06 14:29:59.494385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:35.758 [2024-11-06 14:29:59.494412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84920 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:35.758 [2024-11-06 14:29:59.494431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:35.758 [2024-11-06 14:29:59.494456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:84936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.758 [2024-11-06 14:29:59.494494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:35.758 [2024-11-06 14:29:59.494539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:84952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.758 [2024-11-06 14:29:59.494559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:35.758 [2024-11-06 14:29:59.494586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:84560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.758 [2024-11-06 14:29:59.494605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:35.758 [2024-11-06 14:29:59.494633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:84592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.758 [2024-11-06 14:29:59.494654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:35.758 [2024-11-06 14:29:59.494682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:84624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.758 [2024-11-06 14:29:59.494701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:35.758 [2024-11-06 14:29:59.494729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.758 [2024-11-06 14:29:59.494749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:35.758 [2024-11-06 14:29:59.494777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.758 [2024-11-06 14:29:59.494796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:35.758 [2024-11-06 14:29:59.494823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:84672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.758 [2024-11-06 14:29:59.494842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:35.758 [2024-11-06 14:29:59.494884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:84704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.758 [2024-11-06 14:29:59.494905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:35.758 [2024-11-06 14:29:59.494934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:41 nsid:1 lba:84736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.758 [2024-11-06 14:29:59.494953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:35.758 [2024-11-06 14:29:59.494980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:84768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.758 [2024-11-06 14:29:59.495001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:35.758 [2024-11-06 14:29:59.495029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:84792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.758 [2024-11-06 14:29:59.495048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:35.758 [2024-11-06 14:29:59.495076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:84976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.758 [2024-11-06 14:29:59.495105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:35.758 [2024-11-06 14:29:59.495134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:84992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.758 [2024-11-06 14:29:59.495153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:35.758 [2024-11-06 14:29:59.495185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:85008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.758 [2024-11-06 14:29:59.495205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:35.758 [2024-11-06 14:29:59.495231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:85024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.758 [2024-11-06 14:29:59.495251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:35.758 [2024-11-06 14:29:59.495278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:85040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.758 [2024-11-06 14:29:59.495297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:35.758 [2024-11-06 14:29:59.495324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:84384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.758 [2024-11-06 14:29:59.495343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:35.759 [2024-11-06 14:29:59.495371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:84416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.759 [2024-11-06 14:29:59.495391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:35.759 [2024-11-06 14:29:59.495417] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:84448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.759 [2024-11-06 14:29:59.495436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:35.759 [2024-11-06 14:29:59.495463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.759 [2024-11-06 14:29:59.495483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:35.759 [2024-11-06 14:29:59.495513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:84512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.759 [2024-11-06 14:29:59.495532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:35.759 [2024-11-06 14:29:59.495560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:84536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.759 [2024-11-06 14:29:59.495579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:35.759 [2024-11-06 14:29:59.495608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:84568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.759 [2024-11-06 14:29:59.495640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:35.759 [2024-11-06 14:29:59.495666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:85048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.759 [2024-11-06 14:29:59.495685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:35.759 [2024-11-06 14:29:59.495722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:85064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.759 [2024-11-06 14:29:59.495740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:35.759 [2024-11-06 14:29:59.495767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:85080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.759 [2024-11-06 14:29:59.495786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:35.759 [2024-11-06 14:29:59.495811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:85096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.759 [2024-11-06 14:29:59.495829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:35.759 [2024-11-06 14:29:59.495865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:85112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.759 [2024-11-06 14:29:59.495885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005f 
p:0 m:0 dnr:0 00:25:35.759 [2024-11-06 14:29:59.495930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:85128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.759 [2024-11-06 14:29:59.495950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:35.759 [2024-11-06 14:29:59.495976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:85144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.759 [2024-11-06 14:29:59.495996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:35.759 [2024-11-06 14:29:59.496022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:85160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.759 [2024-11-06 14:29:59.496040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:35.759 [2024-11-06 14:29:59.496065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:85176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.759 [2024-11-06 14:29:59.496084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:35.759 [2024-11-06 14:29:59.496110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:85192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.759 [2024-11-06 14:29:59.496129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:35.759 [2024-11-06 14:29:59.496153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:84824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.759 [2024-11-06 14:29:59.496171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:35.759 [2024-11-06 14:29:59.496197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:84856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.759 [2024-11-06 14:29:59.496216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:35.759 [2024-11-06 14:29:59.496241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:84888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.759 [2024-11-06 14:29:59.496259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:35.759 [2024-11-06 14:29:59.496293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:84928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.759 [2024-11-06 14:29:59.496311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:35.759 [2024-11-06 14:29:59.496336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:84960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.759 [2024-11-06 14:29:59.496356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:35.759 [2024-11-06 14:29:59.498267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:85208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.759 [2024-11-06 14:29:59.498320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:35.759 [2024-11-06 14:29:59.498358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:85224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.759 [2024-11-06 14:29:59.498378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:35.759 [2024-11-06 14:29:59.498405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:85240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.759 [2024-11-06 14:29:59.498425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:35.759 [2024-11-06 14:29:59.498451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:85256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.759 [2024-11-06 14:29:59.498471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:35.759 [2024-11-06 14:29:59.498508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:85272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.759 [2024-11-06 14:29:59.498544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:35.759 [2024-11-06 14:29:59.498573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:85288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.759 [2024-11-06 14:29:59.498593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:35.759 [2024-11-06 14:29:59.498620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:85296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.759 [2024-11-06 14:29:59.498642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:35.759 [2024-11-06 14:29:59.498670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:85312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.759 [2024-11-06 14:29:59.498690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:35.759 [2024-11-06 14:29:59.498718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:85328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.759 [2024-11-06 14:29:59.498739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:35.759 [2024-11-06 14:29:59.498767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:85344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.759 [2024-11-06 14:29:59.498788] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:35.759 [2024-11-06 14:29:59.498815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:85360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.759 [2024-11-06 14:29:59.498848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:35.759 [2024-11-06 14:29:59.498888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:85376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.759 [2024-11-06 14:29:59.498910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:35.759 [2024-11-06 14:29:59.498939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:84632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.759 [2024-11-06 14:29:59.498958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:35.760 [2024-11-06 14:29:59.498986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:84664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.760 [2024-11-06 14:29:59.499021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:35.760 [2024-11-06 14:29:59.499048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:84696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.760 [2024-11-06 14:29:59.499068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:35.760 [2024-11-06 14:29:59.499095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:84728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.760 [2024-11-06 14:29:59.499115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:35.760 [2024-11-06 14:29:59.499143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:85392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.760 [2024-11-06 14:29:59.499162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:35.760 [2024-11-06 14:29:59.499189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:35.760 [2024-11-06 14:29:59.499210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:35.760 7471.57 IOPS, 29.19 MiB/s [2024-11-06T14:30:03.395Z] 7510.76 IOPS, 29.34 MiB/s [2024-11-06T14:30:03.395Z] 7549.20 IOPS, 29.49 MiB/s [2024-11-06T14:30:03.395Z] Received shutdown signal, test time was about 30.191608 seconds 00:25:35.760 00:25:35.760 Latency(us) 00:25:35.760 [2024-11-06T14:30:03.395Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:35.760 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:35.760 Verification LBA range: start 0x0 length 0x4000 
00:25:35.760 Nvme0n1 : 30.19 7555.52 29.51 0.00 0.00 16915.29 194.11 4042702.65 00:25:35.760 [2024-11-06T14:30:03.395Z] =================================================================================================================== 00:25:35.760 [2024-11-06T14:30:03.395Z] Total : 7555.52 29.51 0.00 0.00 16915.29 194.11 4042702.65 00:25:35.760 14:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:35.760 14:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:25:35.760 14:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:25:35.760 14:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:25:35.760 14:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:35.760 14:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:25:36.019 14:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:36.019 14:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:25:36.019 14:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:36.019 14:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:36.019 rmmod nvme_tcp 00:25:36.019 rmmod nvme_fabrics 00:25:36.019 rmmod nvme_keyring 00:25:36.019 14:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:36.019 14:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:25:36.019 14:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:25:36.019 14:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 83291 ']' 00:25:36.019 14:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 83291 00:25:36.019 14:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 83291 ']' 00:25:36.019 14:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 83291 00:25:36.019 14:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:25:36.019 14:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:36.019 14:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83291 00:25:36.019 14:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:36.019 14:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:36.019 14:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83291' 00:25:36.019 killing process with pid 83291 00:25:36.019 14:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 83291 00:25:36.019 14:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 83291 
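Condensed into plain commands, the teardown just traced boils down to the sketch below; the paths and the PID 83291 are taken from this run's log, and the real work is done by the nvmftestfini/nvmfcleanup and killprocess helpers in the sourced nvmf/common.sh and autotest_common.sh, so this only illustrates the order of operations, not a substitute for those functions.

  # Sketch of the teardown sequence traced above (values taken from this run).
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  "$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1    # drop the test subsystem
  trap - SIGINT SIGTERM EXIT                                 # clear the cleanup trap
  rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt  # per-test scratch file

  sync                       # flush before unloading the kernel initiator stack
  modprobe -v -r nvme-tcp    # in this run this also rmmod'ed nvme_fabrics and nvme_keyring
  modprobe -v -r nvme-fabrics

  kill 83291                 # nvmf target app started for this test
  wait 83291                 # works because the helper runs in the shell that launched it
  # The veth/bridge/netns cleanup (ip link delete ..., remove_spdk_ns) follows in the trace below.
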
00:25:37.923 14:30:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:37.923 14:30:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:37.923 14:30:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:37.923 14:30:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:25:37.923 14:30:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:25:37.923 14:30:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:37.923 14:30:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:25:37.923 14:30:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:37.923 14:30:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:37.923 14:30:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:37.923 14:30:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:37.923 14:30:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:37.923 14:30:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:37.923 14:30:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:37.923 14:30:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:37.923 14:30:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:37.923 14:30:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:37.923 14:30:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:37.923 14:30:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:37.923 14:30:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:37.923 14:30:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:37.923 14:30:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:37.923 14:30:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:37.923 14:30:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:37.923 14:30:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:37.923 14:30:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:37.923 14:30:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:25:37.923 00:25:37.923 real 0m38.595s 00:25:37.923 user 1m56.736s 00:25:37.923 sys 0m12.820s 00:25:37.923 14:30:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:25:37.923 14:30:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:37.923 ************************************ 00:25:37.923 END TEST nvmf_host_multipath_status 00:25:37.923 ************************************ 00:25:37.923 14:30:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:37.923 14:30:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:37.923 14:30:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:37.923 14:30:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.923 ************************************ 00:25:37.923 START TEST nvmf_discovery_remove_ifc 00:25:37.923 ************************************ 00:25:37.923 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:38.184 * Looking for test storage... 00:25:38.184 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:38.184 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:38.184 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version 00:25:38.184 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:38.184 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:38.184 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:38.184 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:38.184 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:38.184 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:25:38.184 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:25:38.184 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:25:38.184 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:25:38.184 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:25:38.184 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:25:38.184 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:25:38.184 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:38.184 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:25:38.184 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:25:38.184 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:38.184 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:38.184 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:25:38.184 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:25:38.184 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:38.184 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:25:38.184 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:25:38.184 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:25:38.184 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:25:38.184 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:38.184 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:25:38.184 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:25:38.184 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:38.184 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:38.184 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:25:38.184 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:38.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:38.185 --rc genhtml_branch_coverage=1 00:25:38.185 --rc genhtml_function_coverage=1 00:25:38.185 --rc genhtml_legend=1 00:25:38.185 --rc geninfo_all_blocks=1 00:25:38.185 --rc geninfo_unexecuted_blocks=1 00:25:38.185 00:25:38.185 ' 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:38.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:38.185 --rc genhtml_branch_coverage=1 00:25:38.185 --rc genhtml_function_coverage=1 00:25:38.185 --rc genhtml_legend=1 00:25:38.185 --rc geninfo_all_blocks=1 00:25:38.185 --rc geninfo_unexecuted_blocks=1 00:25:38.185 00:25:38.185 ' 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:38.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:38.185 --rc genhtml_branch_coverage=1 00:25:38.185 --rc genhtml_function_coverage=1 00:25:38.185 --rc genhtml_legend=1 00:25:38.185 --rc geninfo_all_blocks=1 00:25:38.185 --rc geninfo_unexecuted_blocks=1 00:25:38.185 00:25:38.185 ' 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:38.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:38.185 --rc genhtml_branch_coverage=1 00:25:38.185 --rc genhtml_function_coverage=1 00:25:38.185 --rc genhtml_legend=1 00:25:38.185 --rc geninfo_all_blocks=1 00:25:38.185 --rc geninfo_unexecuted_blocks=1 00:25:38.185 00:25:38.185 ' 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:38.185 14:30:05 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:38.185 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:38.185 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:38.186 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:38.186 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:38.186 14:30:05 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:38.186 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:38.186 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:38.186 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:38.186 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:38.186 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:38.186 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:38.186 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:38.186 Cannot find device "nvmf_init_br" 00:25:38.186 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:25:38.186 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:38.186 Cannot find device "nvmf_init_br2" 00:25:38.186 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:25:38.186 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:25:38.186 Cannot find device "nvmf_tgt_br" 00:25:38.186 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:25:38.186 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:25:38.186 Cannot find device "nvmf_tgt_br2" 00:25:38.186 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:25:38.186 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:38.186 Cannot find device "nvmf_init_br" 00:25:38.186 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:25:38.186 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:38.446 Cannot find device "nvmf_init_br2" 00:25:38.446 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:25:38.446 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:25:38.446 Cannot find device "nvmf_tgt_br" 00:25:38.446 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:25:38.446 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:38.446 Cannot find device "nvmf_tgt_br2" 00:25:38.446 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:25:38.446 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:38.446 Cannot find device "nvmf_br" 00:25:38.446 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:25:38.446 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:38.446 Cannot find device "nvmf_init_if" 00:25:38.446 14:30:05 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:25:38.446 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:38.446 Cannot find device "nvmf_init_if2" 00:25:38.446 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:25:38.446 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:38.446 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:38.446 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:25:38.446 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:38.446 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:38.446 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:25:38.446 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:38.446 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:38.446 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:38.446 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:38.446 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:38.446 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:38.446 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:38.446 14:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:38.446 14:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:38.446 14:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:38.446 14:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:38.446 14:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:38.446 14:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:38.446 14:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:38.446 14:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:38.446 14:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:38.446 14:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:38.446 14:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:38.446 14:30:06 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:38.706 14:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:38.706 14:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:38.706 14:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:38.706 14:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:25:38.706 14:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:38.706 14:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:38.706 14:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:38.706 14:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:38.706 14:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:38.706 14:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:38.706 14:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:38.706 14:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:38.706 14:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:38.706 14:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:38.706 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:38.706 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.108 ms 00:25:38.706 00:25:38.706 --- 10.0.0.3 ping statistics --- 00:25:38.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:38.706 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:25:38.706 14:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:38.706 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:25:38.706 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.085 ms 00:25:38.706 00:25:38.706 --- 10.0.0.4 ping statistics --- 00:25:38.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:38.706 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:25:38.706 14:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:38.706 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:38.706 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:25:38.706 00:25:38.706 --- 10.0.0.1 ping statistics --- 00:25:38.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:38.706 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:25:38.706 14:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:38.706 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:38.706 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:25:38.706 00:25:38.706 --- 10.0.0.2 ping statistics --- 00:25:38.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:38.706 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:25:38.706 14:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:38.706 14:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:25:38.706 14:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:38.706 14:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:38.706 14:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:38.706 14:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:38.706 14:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:38.706 14:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:38.706 14:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:38.706 14:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:25:38.706 14:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:38.706 14:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:38.706 14:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:38.706 14:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=84167 00:25:38.706 14:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:38.706 14:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 84167 00:25:38.706 14:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 84167 ']' 00:25:38.706 14:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:38.706 14:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:38.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:38.706 14:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
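Everything from nvmf_veth_init above is plumbing for the rest of this test: two initiator-side and two target-side veth pairs are created, the target ends (nvmf_tgt_if/nvmf_tgt_if2, 10.0.0.3/10.0.0.4) are moved into the nvmf_tgt_ns_spdk namespace, the bridge-facing peers are enslaved to nvmf_br, iptables ACCEPT rules tagged SPDK_NVMF open port 4420, and the four pings confirm reachability in both directions. A minimal standalone sketch of the same topology, reduced to one pair per side and using only names and addresses that appear in the trace, would look roughly like:

  # Sketch only: condensed from the nvmf_veth_init trace above (the second
  # initiator/target pair and the FORWARD rule are omitted for brevity).
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.3                                   # initiator -> target
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target -> initiator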
00:25:38.706 14:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:38.706 14:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:38.966 [2024-11-06 14:30:06.400520] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:25:38.966 [2024-11-06 14:30:06.400646] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:38.966 [2024-11-06 14:30:06.586181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:39.225 [2024-11-06 14:30:06.737486] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:39.225 [2024-11-06 14:30:06.737537] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:39.225 [2024-11-06 14:30:06.737554] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:39.225 [2024-11-06 14:30:06.737573] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:39.225 [2024-11-06 14:30:06.737586] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:39.225 [2024-11-06 14:30:06.739031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:39.484 [2024-11-06 14:30:06.995730] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:39.743 14:30:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:39.743 14:30:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:25:39.743 14:30:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:39.743 14:30:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:39.743 14:30:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:39.743 14:30:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:39.743 14:30:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:25:39.743 14:30:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.743 14:30:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:39.743 [2024-11-06 14:30:07.275608] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:39.743 [2024-11-06 14:30:07.283775] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:25:39.743 null0 00:25:39.743 [2024-11-06 14:30:07.315652] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:39.743 14:30:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.743 14:30:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:25:39.743 14:30:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@59 -- # hostpid=84199 00:25:39.743 14:30:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 84199 /tmp/host.sock 00:25:39.743 14:30:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 84199 ']' 00:25:39.743 14:30:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:25:39.743 14:30:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:39.743 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:39.743 14:30:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:39.743 14:30:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:39.743 14:30:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:40.002 [2024-11-06 14:30:07.461766] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:25:40.002 [2024-11-06 14:30:07.461970] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84199 ] 00:25:40.261 [2024-11-06 14:30:07.671544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:40.261 [2024-11-06 14:30:07.811433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:40.828 14:30:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:40.828 14:30:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:25:40.828 14:30:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:40.828 14:30:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:25:40.828 14:30:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.828 14:30:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:40.828 14:30:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.828 14:30:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:25:40.828 14:30:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.828 14:30:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:41.087 [2024-11-06 14:30:08.520725] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:41.087 14:30:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.087 14:30:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:25:41.087 14:30:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.087 14:30:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:42.464 [2024-11-06 14:30:09.676519] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:25:42.464 [2024-11-06 14:30:09.676586] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:25:42.464 [2024-11-06 14:30:09.676643] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:25:42.464 [2024-11-06 14:30:09.682609] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:25:42.464 [2024-11-06 14:30:09.744650] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:25:42.464 [2024-11-06 14:30:09.746825] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x61500002b500:1 started. 00:25:42.464 [2024-11-06 14:30:09.749827] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:42.464 [2024-11-06 14:30:09.750166] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:42.464 [2024-11-06 14:30:09.750363] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:42.464 [2024-11-06 14:30:09.750543] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:25:42.464 [2024-11-06 14:30:09.750722] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:25:42.464 14:30:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.464 14:30:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:25:42.464 14:30:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:42.464 [2024-11-06 14:30:09.756134] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x61500002b500 was disconnected and freed. delete nvme_qpair. 
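The sequence just traced is the core of discovery_remove_ifc.sh: the namespaced nvmf_tgt (pid 84167) is already listening on 10.0.0.3 ports 8009 (discovery) and 4420, a second SPDK app is started as the NVMe-oF host (hostpid 84199, RPC socket /tmp/host.sock), discovery is kicked off with deliberately short reconnect/loss timeouts, and the test then polls bdev_get_bdevs until the attached namespace shows up as nvme0n1. A condensed reconstruction of the host-side steps, assuming rpc_cmd is simply the framework's wrapper for issuing RPCs on the given -s socket, is:

  # Condensed from the trace above (@58-@72); the timeouts are intentionally tiny so the
  # later interface removal makes the controller fail fast.
  host_sock=/tmp/host.sock
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r "$host_sock" --wait-for-rpc -L bdev_nvme &
  hostpid=$!

  rpc_cmd -s "$host_sock" bdev_nvme_set_options -e 1
  rpc_cmd -s "$host_sock" framework_start_init            # app was started with --wait-for-rpc
  rpc_cmd -s "$host_sock" bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 \
      -f ipv4 -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
      --wait-for-attach

  get_bdev_list() {   # as traced at discovery_remove_ifc.sh@29
      rpc_cmd -s "$host_sock" bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  wait_for_bdev() {   # simplified: the trace at @33/@34 polls once per second
      while [[ "$(get_bdev_list)" != "$1" ]]; do sleep 1; done
  }
  wait_for_bdev nvme0n1    # nvme0 attaches via discovery and exposes nvme0n1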
00:25:42.464 14:30:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:42.464 14:30:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:42.464 14:30:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:42.464 14:30:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.464 14:30:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:42.464 14:30:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:42.464 14:30:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.464 14:30:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:25:42.464 14:30:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:25:42.464 14:30:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:25:42.464 14:30:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:25:42.464 14:30:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:42.464 14:30:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:42.464 14:30:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:42.464 14:30:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:42.464 14:30:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.464 14:30:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:42.464 14:30:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:42.464 14:30:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.464 14:30:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:42.464 14:30:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:43.401 14:30:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:43.401 14:30:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:43.401 14:30:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:43.401 14:30:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.401 14:30:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:43.401 14:30:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:43.401 14:30:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:43.401 14:30:10 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.401 14:30:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:43.401 14:30:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:44.339 14:30:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:44.339 14:30:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:44.339 14:30:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.339 14:30:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:44.339 14:30:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:44.339 14:30:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:44.339 14:30:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:44.598 14:30:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.598 14:30:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:44.598 14:30:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:45.536 14:30:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:45.536 14:30:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:45.536 14:30:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:45.536 14:30:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.536 14:30:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:45.536 14:30:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:45.536 14:30:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:45.536 14:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.536 14:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:45.536 14:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:46.473 14:30:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:46.473 14:30:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:46.473 14:30:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:46.473 14:30:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:46.473 14:30:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:46.473 14:30:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.473 14:30:14 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:46.473 14:30:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.473 14:30:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:46.473 14:30:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:47.851 14:30:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:47.851 14:30:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:47.851 14:30:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.851 14:30:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:47.851 14:30:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:47.851 14:30:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:47.851 14:30:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:47.851 14:30:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.851 14:30:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:47.851 14:30:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:47.851 [2024-11-06 14:30:15.166534] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:25:47.851 [2024-11-06 14:30:15.166622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:47.851 [2024-11-06 14:30:15.166642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.851 [2024-11-06 14:30:15.166661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:47.851 [2024-11-06 14:30:15.166674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.851 [2024-11-06 14:30:15.166688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:47.851 [2024-11-06 14:30:15.166700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.851 [2024-11-06 14:30:15.166713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:47.851 [2024-11-06 14:30:15.166724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.851 [2024-11-06 14:30:15.166737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:47.851 [2024-11-06 14:30:15.166748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:47.851 [2024-11-06 14:30:15.166760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(6) to be set 00:25:47.851 [2024-11-06 14:30:15.176506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b000 (9): Bad file descriptor 00:25:47.851 [2024-11-06 14:30:15.186512] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:47.851 [2024-11-06 14:30:15.186689] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:47.851 [2024-11-06 14:30:15.186705] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:47.851 [2024-11-06 14:30:15.186724] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:47.851 [2024-11-06 14:30:15.186793] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:48.789 14:30:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:48.789 14:30:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:48.789 14:30:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:48.789 14:30:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:48.789 14:30:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.789 14:30:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:48.789 14:30:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:48.789 [2024-11-06 14:30:16.233932] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:25:48.789 [2024-11-06 14:30:16.234099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002b000 with addr=10.0.0.3, port=4420 00:25:48.789 [2024-11-06 14:30:16.234178] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(6) to be set 00:25:48.789 [2024-11-06 14:30:16.234294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b000 (9): Bad file descriptor 00:25:48.789 [2024-11-06 14:30:16.235817] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:25:48.789 [2024-11-06 14:30:16.235988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:48.789 [2024-11-06 14:30:16.236034] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:48.789 [2024-11-06 14:30:16.236084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:48.789 [2024-11-06 14:30:16.236121] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:48.789 [2024-11-06 14:30:16.236149] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:25:48.789 [2024-11-06 14:30:16.236173] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:48.789 [2024-11-06 14:30:16.236209] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:48.789 [2024-11-06 14:30:16.236243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:48.789 14:30:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.789 14:30:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:48.789 14:30:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:49.725 [2024-11-06 14:30:17.234775] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:49.725 [2024-11-06 14:30:17.234864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:49.725 [2024-11-06 14:30:17.234899] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:49.725 [2024-11-06 14:30:17.234912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:49.725 [2024-11-06 14:30:17.234926] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:25:49.725 [2024-11-06 14:30:17.234940] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:49.725 [2024-11-06 14:30:17.234950] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:49.725 [2024-11-06 14:30:17.234959] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
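The errno 110 connect failure and the 'Resetting controller failed' / 'Clear pending resets' messages above are the intended outcome of the step traced earlier at discovery_remove_ifc.sh@75/@76: the target's address was deleted and its interface taken down inside the namespace, so the host can no longer reach 10.0.0.3:4420. With --reconnect-delay-sec 1 and --ctrlr-loss-timeout-sec 2 the host retries roughly once per second and abandons the controller after about two seconds, which is exactly what the following log lines record. The removal itself is just two commands:

  # The "remove interface" half of the test, as traced at @75/@76.
  ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
  wait_for_bdev ''   # the host-side bdev list must drain to empty once the controller is dropped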
00:25:49.725 [2024-11-06 14:30:17.235019] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:25:49.725 [2024-11-06 14:30:17.235086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.725 [2024-11-06 14:30:17.235105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.725 [2024-11-06 14:30:17.235134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.725 [2024-11-06 14:30:17.235146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.725 [2024-11-06 14:30:17.235160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.725 [2024-11-06 14:30:17.235172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.725 [2024-11-06 14:30:17.235185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.725 [2024-11-06 14:30:17.235196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.725 [2024-11-06 14:30:17.235210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.725 [2024-11-06 14:30:17.235222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.725 [2024-11-06 14:30:17.235247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:25:49.725 [2024-11-06 14:30:17.235309] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:25:49.725 [2024-11-06 14:30:17.236304] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:25:49.725 [2024-11-06 14:30:17.236335] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:25:49.725 14:30:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:49.725 14:30:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:49.725 14:30:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:49.725 14:30:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:49.725 14:30:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:49.725 14:30:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.725 14:30:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:49.725 14:30:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.725 14:30:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:25:49.725 14:30:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:49.725 14:30:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:49.725 14:30:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:25:49.725 14:30:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:49.725 14:30:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:49.725 14:30:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:49.725 14:30:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:49.984 14:30:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:49.984 14:30:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.984 14:30:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:49.984 14:30:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.984 14:30:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:49.984 14:30:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:50.921 14:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:50.921 14:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:50.921 14:30:18 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.921 14:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:50.921 14:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:50.921 14:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:50.921 14:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:50.921 14:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.921 14:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:50.921 14:30:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:51.857 [2024-11-06 14:30:19.245350] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:25:51.857 [2024-11-06 14:30:19.245393] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:25:51.857 [2024-11-06 14:30:19.245428] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:25:51.857 [2024-11-06 14:30:19.251405] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:25:51.857 [2024-11-06 14:30:19.314074] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:25:51.857 [2024-11-06 14:30:19.315752] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x61500002c180:1 started. 00:25:51.857 [2024-11-06 14:30:19.318145] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:51.857 [2024-11-06 14:30:19.318323] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:51.857 [2024-11-06 14:30:19.318413] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:51.857 [2024-11-06 14:30:19.318524] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:25:51.857 [2024-11-06 14:30:19.318638] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:25:51.857 [2024-11-06 14:30:19.324581] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x61500002c180 was disconnected and freed. delete nvme_qpair. 
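The second discovery_attach_cb / 'ctrlr was created' sequence above follows the restore step traced at @82/@83: the address is added back and the interface brought up, the discovery service on 10.0.0.3:8009 reconnects, a new controller instance (nqn.2016-06.io.spdk:cnode0, 2) is created, and the namespace reappears under the next free bdev name, nvme1n1, which is what wait_for_bdev nvme1n1 has been polling for. The restore is the mirror image of the removal:

  # The "restore interface" half of the test, as traced at @82/@83.
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  wait_for_bdev nvme1n1   # succeeds once the re-attached controller exposes its namespace again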
00:25:51.857 14:30:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:51.857 14:30:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:51.857 14:30:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.857 14:30:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:51.857 14:30:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:51.857 14:30:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:51.857 14:30:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:52.116 14:30:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.116 14:30:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:25:52.116 14:30:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:25:52.116 14:30:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 84199 00:25:52.116 14:30:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 84199 ']' 00:25:52.116 14:30:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 84199 00:25:52.116 14:30:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:25:52.116 14:30:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:52.116 14:30:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84199 00:25:52.116 killing process with pid 84199 00:25:52.116 14:30:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:52.116 14:30:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:52.116 14:30:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84199' 00:25:52.116 14:30:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 84199 00:25:52.116 14:30:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 84199 00:25:53.526 14:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:25:53.526 14:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:53.526 14:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:25:53.526 14:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:53.526 14:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:25:53.526 14:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:53.526 14:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:53.526 rmmod nvme_tcp 00:25:53.526 rmmod nvme_fabrics 00:25:53.526 rmmod nvme_keyring 00:25:53.526 14:30:20 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:53.526 14:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:25:53.526 14:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:25:53.526 14:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 84167 ']' 00:25:53.526 14:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 84167 00:25:53.526 14:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 84167 ']' 00:25:53.526 14:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 84167 00:25:53.526 14:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:25:53.526 14:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:53.526 14:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84167 00:25:53.526 killing process with pid 84167 00:25:53.526 14:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:25:53.526 14:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:25:53.526 14:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84167' 00:25:53.526 14:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 84167 00:25:53.526 14:30:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 84167 00:25:54.905 14:30:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:54.905 14:30:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:54.905 14:30:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:54.905 14:30:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:25:54.905 14:30:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:25:54.905 14:30:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:54.905 14:30:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:25:54.905 14:30:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:54.905 14:30:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:54.905 14:30:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:54.905 14:30:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:54.905 14:30:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:54.905 14:30:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:54.905 14:30:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:54.905 14:30:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:54.905 14:30:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:54.905 14:30:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:54.905 14:30:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:54.905 14:30:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:54.905 14:30:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:54.905 14:30:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:54.905 14:30:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:54.905 14:30:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:54.905 14:30:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:54.905 14:30:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:54.905 14:30:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:54.905 14:30:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:25:54.905 00:25:54.905 real 0m16.993s 00:25:54.905 user 0m26.998s 00:25:54.905 sys 0m3.870s 00:25:54.905 14:30:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:54.905 14:30:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:54.905 ************************************ 00:25:54.905 END TEST nvmf_discovery_remove_ifc 00:25:54.905 ************************************ 00:25:54.905 14:30:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:54.905 14:30:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:54.905 14:30:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:54.905 14:30:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.905 ************************************ 00:25:54.905 START TEST nvmf_identify_kernel_target 00:25:54.905 ************************************ 00:25:54.905 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:55.166 * Looking for test storage... 
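Between the END TEST marker and the identify_kernel_target storage probe above, nvmftestfini tore down everything the prologue built: both SPDK apps are killed, the nvme-tcp/nvme-fabrics/nvme-keyring modules are unloaded (the rmmod lines), the SPDK_NVMF-tagged iptables rules are dropped, and nvmf_veth_fini deletes the bridge, the veth pairs and finally the namespace. Condensed, and with the namespace deletion flagged as an assumption because _remove_spdk_ns is not expanded in the trace:

  # Condensed teardown matching the nvmftestfini / nvmf_veth_fini trace above.
  kill 84199                                # killprocess $hostpid  (host app)
  kill 84167                                # killprocess $nvmfpid  (namespaced nvmf_tgt)
  modprobe -v -r nvme-tcp                   # also removes nvme_fabrics / nvme_keyring, as logged
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep everything except the test's rules
  ip link set nvmf_init_br nomaster
  ip link set nvmf_tgt_br nomaster
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns delete nvmf_tgt_ns_spdk          # assumed implementation of _remove_spdk_ns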
00:25:55.166 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:55.166 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:55.166 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:25:55.166 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:55.166 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:55.166 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:55.166 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:55.166 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:55.166 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:25:55.166 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:25:55.166 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:25:55.166 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:25:55.166 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:25:55.166 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:25:55.166 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:25:55.166 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:55.166 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:25:55.166 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:25:55.166 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:55.166 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:55.166 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:25:55.166 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:25:55.166 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:55.166 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:25:55.166 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:25:55.166 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:25:55.166 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:25:55.166 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:55.166 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:25:55.166 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:25:55.166 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:55.166 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:55.166 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:25:55.166 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:55.166 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:55.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:55.166 --rc genhtml_branch_coverage=1 00:25:55.166 --rc genhtml_function_coverage=1 00:25:55.166 --rc genhtml_legend=1 00:25:55.166 --rc geninfo_all_blocks=1 00:25:55.166 --rc geninfo_unexecuted_blocks=1 00:25:55.166 00:25:55.166 ' 00:25:55.166 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:55.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:55.166 --rc genhtml_branch_coverage=1 00:25:55.166 --rc genhtml_function_coverage=1 00:25:55.166 --rc genhtml_legend=1 00:25:55.166 --rc geninfo_all_blocks=1 00:25:55.166 --rc geninfo_unexecuted_blocks=1 00:25:55.166 00:25:55.166 ' 00:25:55.166 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:55.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:55.166 --rc genhtml_branch_coverage=1 00:25:55.166 --rc genhtml_function_coverage=1 00:25:55.166 --rc genhtml_legend=1 00:25:55.166 --rc geninfo_all_blocks=1 00:25:55.166 --rc geninfo_unexecuted_blocks=1 00:25:55.166 00:25:55.166 ' 00:25:55.166 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:55.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:55.166 --rc genhtml_branch_coverage=1 00:25:55.166 --rc genhtml_function_coverage=1 00:25:55.166 --rc genhtml_legend=1 00:25:55.166 --rc geninfo_all_blocks=1 00:25:55.166 --rc geninfo_unexecuted_blocks=1 00:25:55.166 00:25:55.166 ' 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
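The cmp_versions trace just above is only the harness deciding whether the installed lcov (1.15 here) is older than 2.x, so that the pre-2.0 '--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' options are exported. A simplified reconstruction of the comparison helper, as implied by the trace (split both versions on '.', '-' and ':', then compare field by field), is:

  # Simplified reconstruction of the cmp_versions helper traced above; the real helper in
  # scripts/common.sh handles more operators and non-numeric version fields.
  lt() { cmp_versions "$1" '<' "$2"; }
  cmp_versions() {
      local IFS=.-: op=$2
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$3"
      local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < max; v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
      done
      [[ $op == '=' ]]
  }
  lt 1.15 2 && echo "lcov predates 2.x"    # the branch this run took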
00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:55.167 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:25:55.167 14:30:22 
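[editor note] The line "common.sh: line 33: [: : integer expression expected" above is a real (if harmless) script error: an empty variable is being compared with -eq, so test(1) sees no integer operand and the check simply fails. A hedged sketch of the failing pattern and a guarded form; SOME_FLAG is an illustrative name, not the variable common.sh actually tests:

```bash
# Failing pattern: an empty string is not a valid operand for -eq.
#   [ "$SOME_FLAG" -eq 1 ]    # -> "[: : integer expression expected" when SOME_FLAG=""
# Defaulting the empty value keeps the comparison purely numeric:
if [ "${SOME_FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
fi
```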
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:55.167 14:30:22 
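[editor note] nvmf_veth_init above records the whole address plan in variables and wraps target-side commands in the NVMF_TARGET_NS_CMD array so they execute inside the nvmf_tgt_ns_spdk namespace. A minimal sketch of that prefix-array pattern, using only names defined in the trace:

```bash
# Target-side commands are prefixed with the namespace wrapper array.
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")

# Same query, host-side view vs. target-namespace view:
ip -brief addr show
"${NVMF_TARGET_NS_CMD[@]}" ip -brief addr show
```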
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:55.167 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:55.428 Cannot find device "nvmf_init_br" 00:25:55.428 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:25:55.428 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:55.428 Cannot find device "nvmf_init_br2" 00:25:55.428 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:25:55.428 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:25:55.428 Cannot find device "nvmf_tgt_br" 00:25:55.428 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:25:55.428 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:25:55.428 Cannot find device "nvmf_tgt_br2" 00:25:55.428 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:25:55.428 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:55.428 Cannot find device "nvmf_init_br" 00:25:55.428 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:25:55.428 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:55.428 Cannot find device "nvmf_init_br2" 00:25:55.428 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:25:55.428 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:25:55.428 Cannot find device "nvmf_tgt_br" 00:25:55.428 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:25:55.428 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:55.428 Cannot find device "nvmf_tgt_br2" 00:25:55.428 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:25:55.428 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:55.428 Cannot find device "nvmf_br" 00:25:55.428 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:25:55.428 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:55.428 Cannot find device "nvmf_init_if" 00:25:55.428 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:25:55.428 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:55.428 Cannot find device "nvmf_init_if2" 00:25:55.428 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:25:55.428 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:55.428 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:55.428 14:30:22 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:25:55.428 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:55.428 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:55.428 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:25:55.428 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:55.428 14:30:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:55.428 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:55.428 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:55.428 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:55.428 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:55.688 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:55.688 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:55.688 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:55.688 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:55.688 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:55.688 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:55.688 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:55.688 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:55.688 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:55.688 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:55.688 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:55.688 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:55.688 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:55.688 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:55.688 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:55.688 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:55.688 14:30:23 
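[editor note] The block above builds the test network from scratch: the failed "Cannot find device" deletions are just best-effort cleanup of a previous run, then a namespace is created for the target, veth pairs are added with their peer ends left in the root namespace, addresses 10.0.0.1-10.0.0.4/24 are assigned, and everything is brought up. Condensed to a single initiator/target pair (names and addresses taken from the trace; the second pair is analogous):

```bash
set -e
ip netns add nvmf_tgt_ns_spdk

# One veth pair per side; the *_br ends are later enslaved to the nvmf_br bridge.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# Initiator address in the root namespace, target address inside the namespace.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
```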
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:25:55.688 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:55.688 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:55.688 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:55.688 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:55.688 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:55.688 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:55.688 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:55.688 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:55.688 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:55.688 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:55.688 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:55.688 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.105 ms 00:25:55.688 00:25:55.688 --- 10.0.0.3 ping statistics --- 00:25:55.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:55.688 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:25:55.688 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:55.688 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:25:55.688 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.097 ms 00:25:55.688 00:25:55.688 --- 10.0.0.4 ping statistics --- 00:25:55.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:55.688 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:25:55.688 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:55.688 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:55.688 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:25:55.688 00:25:55.688 --- 10.0.0.1 ping statistics --- 00:25:55.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:55.688 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:25:55.688 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:55.688 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
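[editor note] The ipts wrapper above appends "-m comment --comment 'SPDK_NVMF:<rule>'" to every firewall rule it installs, and the iptr step near the end of the test removes them all by filtering the saved ruleset; the pings that follow just confirm the bridged topology forwards both ways. A sketch of that tag-and-sweep pattern with the same comment prefix:

```bash
# Install: open the NVMe/TCP port on the initiator interface, tagged for later cleanup.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

# Sweep: drop every rule carrying the SPDK_NVMF tag in one pass.
iptables-save | grep -v SPDK_NVMF | iptables-restore
```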
00:25:55.688 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms 00:25:55.688 00:25:55.688 --- 10.0.0.2 ping statistics --- 00:25:55.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:55.688 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:25:55.688 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:55.688 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:25:55.688 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:55.688 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:55.688 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:55.688 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:55.688 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:55.688 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:55.688 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:55.948 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:25:55.948 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:25:55.948 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:25:55.948 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:55.948 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:55.948 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.948 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.948 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:55.948 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.948 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:55.948 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:55.948 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:55.948 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:25:55.948 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:25:55.948 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:25:55.948 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:25:55.948 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:55.948 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:55.948 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:55.948 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:25:55.948 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:25:55.948 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:25:55.948 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:55.948 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:56.207 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:56.466 Waiting for block devices as requested 00:25:56.466 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:56.466 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:25:56.725 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:56.725 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:56.725 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:25:56.725 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:25:56.725 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:56.725 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:25:56.725 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:25:56.725 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:56.725 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:25:56.725 No valid GPT data, bailing 00:25:56.725 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:56.725 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:25:56.725 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:25:56.725 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:25:56.725 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:56.725 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:25:56.725 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:25:56.725 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:25:56.725 14:30:24 
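[editor note] The loop around here is choosing a scratch NVMe namespace to back the kernel target: zoned devices are skipped and "No valid GPT data, bailing" is the desired outcome, meaning the device carries no partition table and is safe to claim. A simplified sketch of that eligibility check; it uses blkid only and omits the SPDK-specific GPT probe (spdk-gpt.py) the real helper also runs:

```bash
# Pick the first non-zoned NVMe namespace with no recognizable partition table.
for dev in /sys/block/nvme*; do
    name=$(basename "$dev")
    # Skip zoned block devices (host-aware / host-managed).
    [[ -e $dev/queue/zoned && $(cat "$dev/queue/zoned") != none ]] && continue
    # blkid prints a PTTYPE value only when a partition table is present.
    [[ -z $(blkid -s PTTYPE -o value "/dev/$name") ]] || continue
    echo "using /dev/$name"
    break
done
```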
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:25:56.725 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:25:56.726 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:25:56.726 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:25:56.726 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:25:56.726 No valid GPT data, bailing 00:25:56.726 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:25:56.726 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:25:56.726 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:25:56.726 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:25:56.726 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:56.726 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:25:56.726 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:25:56.726 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:25:56.726 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:25:56.726 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:25:56.726 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:25:56.726 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:25:56.726 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:25:56.726 No valid GPT data, bailing 00:25:56.726 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:25:56.726 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:25:56.726 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:25:56.726 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:25:56.726 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:56.726 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:25:56.726 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:25:56.726 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:25:56.726 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:25:56.726 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1651 -- # [[ none != none ]] 00:25:56.726 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:25:56.726 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:25:56.726 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:25:56.985 No valid GPT data, bailing 00:25:56.985 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:25:56.985 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:25:56.985 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:25:56.985 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:25:56.985 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:25:56.985 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:56.985 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:56.985 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:56.985 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:56.985 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:25:56.985 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:25:56.985 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:25:56.985 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:25:56.985 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:25:56.985 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:25:56.985 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:25:56.985 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:56.985 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid=406d54d0-5e94-472a-a2b3-4291f3ac81e0 -a 10.0.0.1 -t tcp -s 4420 00:25:56.985 00:25:56.985 Discovery Log Number of Records 2, Generation counter 2 00:25:56.985 =====Discovery Log Entry 0====== 00:25:56.985 trtype: tcp 00:25:56.985 adrfam: ipv4 00:25:56.985 subtype: current discovery subsystem 00:25:56.985 treq: not specified, sq flow control disable supported 00:25:56.985 portid: 1 00:25:56.985 trsvcid: 4420 00:25:56.985 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:56.985 traddr: 10.0.0.1 00:25:56.985 eflags: none 00:25:56.985 sectype: none 00:25:56.985 =====Discovery Log Entry 1====== 00:25:56.985 trtype: tcp 00:25:56.985 adrfam: ipv4 00:25:56.985 subtype: nvme subsystem 00:25:56.985 treq: not 
specified, sq flow control disable supported 00:25:56.985 portid: 1 00:25:56.985 trsvcid: 4420 00:25:56.985 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:56.985 traddr: 10.0.0.1 00:25:56.985 eflags: none 00:25:56.985 sectype: none 00:25:56.985 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:25:56.985 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:25:57.245 ===================================================== 00:25:57.245 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:57.245 ===================================================== 00:25:57.245 Controller Capabilities/Features 00:25:57.245 ================================ 00:25:57.245 Vendor ID: 0000 00:25:57.245 Subsystem Vendor ID: 0000 00:25:57.245 Serial Number: 067fbb144d66581d39b8 00:25:57.245 Model Number: Linux 00:25:57.245 Firmware Version: 6.8.9-20 00:25:57.245 Recommended Arb Burst: 0 00:25:57.245 IEEE OUI Identifier: 00 00 00 00:25:57.245 Multi-path I/O 00:25:57.245 May have multiple subsystem ports: No 00:25:57.245 May have multiple controllers: No 00:25:57.245 Associated with SR-IOV VF: No 00:25:57.245 Max Data Transfer Size: Unlimited 00:25:57.245 Max Number of Namespaces: 0 00:25:57.245 Max Number of I/O Queues: 1024 00:25:57.245 NVMe Specification Version (VS): 1.3 00:25:57.245 NVMe Specification Version (Identify): 1.3 00:25:57.245 Maximum Queue Entries: 1024 00:25:57.245 Contiguous Queues Required: No 00:25:57.245 Arbitration Mechanisms Supported 00:25:57.245 Weighted Round Robin: Not Supported 00:25:57.245 Vendor Specific: Not Supported 00:25:57.245 Reset Timeout: 7500 ms 00:25:57.245 Doorbell Stride: 4 bytes 00:25:57.245 NVM Subsystem Reset: Not Supported 00:25:57.245 Command Sets Supported 00:25:57.245 NVM Command Set: Supported 00:25:57.245 Boot Partition: Not Supported 00:25:57.245 Memory Page Size Minimum: 4096 bytes 00:25:57.245 Memory Page Size Maximum: 4096 bytes 00:25:57.245 Persistent Memory Region: Not Supported 00:25:57.245 Optional Asynchronous Events Supported 00:25:57.245 Namespace Attribute Notices: Not Supported 00:25:57.245 Firmware Activation Notices: Not Supported 00:25:57.245 ANA Change Notices: Not Supported 00:25:57.245 PLE Aggregate Log Change Notices: Not Supported 00:25:57.246 LBA Status Info Alert Notices: Not Supported 00:25:57.246 EGE Aggregate Log Change Notices: Not Supported 00:25:57.246 Normal NVM Subsystem Shutdown event: Not Supported 00:25:57.246 Zone Descriptor Change Notices: Not Supported 00:25:57.246 Discovery Log Change Notices: Supported 00:25:57.246 Controller Attributes 00:25:57.246 128-bit Host Identifier: Not Supported 00:25:57.246 Non-Operational Permissive Mode: Not Supported 00:25:57.246 NVM Sets: Not Supported 00:25:57.246 Read Recovery Levels: Not Supported 00:25:57.246 Endurance Groups: Not Supported 00:25:57.246 Predictable Latency Mode: Not Supported 00:25:57.246 Traffic Based Keep ALive: Not Supported 00:25:57.246 Namespace Granularity: Not Supported 00:25:57.246 SQ Associations: Not Supported 00:25:57.246 UUID List: Not Supported 00:25:57.246 Multi-Domain Subsystem: Not Supported 00:25:57.246 Fixed Capacity Management: Not Supported 00:25:57.246 Variable Capacity Management: Not Supported 00:25:57.246 Delete Endurance Group: Not Supported 00:25:57.246 Delete NVM Set: Not Supported 00:25:57.246 Extended LBA Formats Supported: Not Supported 00:25:57.246 Flexible Data 
Placement Supported: Not Supported 00:25:57.246 00:25:57.246 Controller Memory Buffer Support 00:25:57.246 ================================ 00:25:57.246 Supported: No 00:25:57.246 00:25:57.246 Persistent Memory Region Support 00:25:57.246 ================================ 00:25:57.246 Supported: No 00:25:57.246 00:25:57.246 Admin Command Set Attributes 00:25:57.246 ============================ 00:25:57.246 Security Send/Receive: Not Supported 00:25:57.246 Format NVM: Not Supported 00:25:57.246 Firmware Activate/Download: Not Supported 00:25:57.246 Namespace Management: Not Supported 00:25:57.246 Device Self-Test: Not Supported 00:25:57.246 Directives: Not Supported 00:25:57.246 NVMe-MI: Not Supported 00:25:57.246 Virtualization Management: Not Supported 00:25:57.246 Doorbell Buffer Config: Not Supported 00:25:57.246 Get LBA Status Capability: Not Supported 00:25:57.246 Command & Feature Lockdown Capability: Not Supported 00:25:57.246 Abort Command Limit: 1 00:25:57.246 Async Event Request Limit: 1 00:25:57.246 Number of Firmware Slots: N/A 00:25:57.246 Firmware Slot 1 Read-Only: N/A 00:25:57.246 Firmware Activation Without Reset: N/A 00:25:57.246 Multiple Update Detection Support: N/A 00:25:57.246 Firmware Update Granularity: No Information Provided 00:25:57.246 Per-Namespace SMART Log: No 00:25:57.246 Asymmetric Namespace Access Log Page: Not Supported 00:25:57.246 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:57.246 Command Effects Log Page: Not Supported 00:25:57.246 Get Log Page Extended Data: Supported 00:25:57.246 Telemetry Log Pages: Not Supported 00:25:57.246 Persistent Event Log Pages: Not Supported 00:25:57.246 Supported Log Pages Log Page: May Support 00:25:57.246 Commands Supported & Effects Log Page: Not Supported 00:25:57.246 Feature Identifiers & Effects Log Page:May Support 00:25:57.246 NVMe-MI Commands & Effects Log Page: May Support 00:25:57.246 Data Area 4 for Telemetry Log: Not Supported 00:25:57.246 Error Log Page Entries Supported: 1 00:25:57.246 Keep Alive: Not Supported 00:25:57.246 00:25:57.246 NVM Command Set Attributes 00:25:57.246 ========================== 00:25:57.246 Submission Queue Entry Size 00:25:57.246 Max: 1 00:25:57.246 Min: 1 00:25:57.246 Completion Queue Entry Size 00:25:57.246 Max: 1 00:25:57.246 Min: 1 00:25:57.246 Number of Namespaces: 0 00:25:57.246 Compare Command: Not Supported 00:25:57.246 Write Uncorrectable Command: Not Supported 00:25:57.246 Dataset Management Command: Not Supported 00:25:57.246 Write Zeroes Command: Not Supported 00:25:57.246 Set Features Save Field: Not Supported 00:25:57.246 Reservations: Not Supported 00:25:57.246 Timestamp: Not Supported 00:25:57.246 Copy: Not Supported 00:25:57.246 Volatile Write Cache: Not Present 00:25:57.246 Atomic Write Unit (Normal): 1 00:25:57.246 Atomic Write Unit (PFail): 1 00:25:57.246 Atomic Compare & Write Unit: 1 00:25:57.246 Fused Compare & Write: Not Supported 00:25:57.246 Scatter-Gather List 00:25:57.246 SGL Command Set: Supported 00:25:57.246 SGL Keyed: Not Supported 00:25:57.246 SGL Bit Bucket Descriptor: Not Supported 00:25:57.246 SGL Metadata Pointer: Not Supported 00:25:57.246 Oversized SGL: Not Supported 00:25:57.246 SGL Metadata Address: Not Supported 00:25:57.246 SGL Offset: Supported 00:25:57.246 Transport SGL Data Block: Not Supported 00:25:57.246 Replay Protected Memory Block: Not Supported 00:25:57.246 00:25:57.246 Firmware Slot Information 00:25:57.246 ========================= 00:25:57.246 Active slot: 0 00:25:57.246 00:25:57.246 00:25:57.246 Error Log 
00:25:57.246 ========= 00:25:57.246 00:25:57.246 Active Namespaces 00:25:57.246 ================= 00:25:57.246 Discovery Log Page 00:25:57.246 ================== 00:25:57.246 Generation Counter: 2 00:25:57.246 Number of Records: 2 00:25:57.246 Record Format: 0 00:25:57.246 00:25:57.246 Discovery Log Entry 0 00:25:57.246 ---------------------- 00:25:57.246 Transport Type: 3 (TCP) 00:25:57.246 Address Family: 1 (IPv4) 00:25:57.246 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:57.246 Entry Flags: 00:25:57.246 Duplicate Returned Information: 0 00:25:57.246 Explicit Persistent Connection Support for Discovery: 0 00:25:57.246 Transport Requirements: 00:25:57.246 Secure Channel: Not Specified 00:25:57.246 Port ID: 1 (0x0001) 00:25:57.246 Controller ID: 65535 (0xffff) 00:25:57.246 Admin Max SQ Size: 32 00:25:57.246 Transport Service Identifier: 4420 00:25:57.246 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:57.246 Transport Address: 10.0.0.1 00:25:57.246 Discovery Log Entry 1 00:25:57.246 ---------------------- 00:25:57.246 Transport Type: 3 (TCP) 00:25:57.246 Address Family: 1 (IPv4) 00:25:57.246 Subsystem Type: 2 (NVM Subsystem) 00:25:57.246 Entry Flags: 00:25:57.246 Duplicate Returned Information: 0 00:25:57.246 Explicit Persistent Connection Support for Discovery: 0 00:25:57.246 Transport Requirements: 00:25:57.246 Secure Channel: Not Specified 00:25:57.246 Port ID: 1 (0x0001) 00:25:57.246 Controller ID: 65535 (0xffff) 00:25:57.246 Admin Max SQ Size: 32 00:25:57.246 Transport Service Identifier: 4420 00:25:57.246 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:25:57.246 Transport Address: 10.0.0.1 00:25:57.246 14:30:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:57.506 get_feature(0x01) failed 00:25:57.506 get_feature(0x02) failed 00:25:57.506 get_feature(0x04) failed 00:25:57.506 ===================================================== 00:25:57.506 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:57.506 ===================================================== 00:25:57.506 Controller Capabilities/Features 00:25:57.506 ================================ 00:25:57.506 Vendor ID: 0000 00:25:57.506 Subsystem Vendor ID: 0000 00:25:57.506 Serial Number: 0847795c72c89b119c50 00:25:57.506 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:25:57.506 Firmware Version: 6.8.9-20 00:25:57.506 Recommended Arb Burst: 6 00:25:57.506 IEEE OUI Identifier: 00 00 00 00:25:57.506 Multi-path I/O 00:25:57.506 May have multiple subsystem ports: Yes 00:25:57.506 May have multiple controllers: Yes 00:25:57.506 Associated with SR-IOV VF: No 00:25:57.506 Max Data Transfer Size: Unlimited 00:25:57.506 Max Number of Namespaces: 1024 00:25:57.506 Max Number of I/O Queues: 128 00:25:57.506 NVMe Specification Version (VS): 1.3 00:25:57.506 NVMe Specification Version (Identify): 1.3 00:25:57.506 Maximum Queue Entries: 1024 00:25:57.506 Contiguous Queues Required: No 00:25:57.506 Arbitration Mechanisms Supported 00:25:57.506 Weighted Round Robin: Not Supported 00:25:57.506 Vendor Specific: Not Supported 00:25:57.506 Reset Timeout: 7500 ms 00:25:57.506 Doorbell Stride: 4 bytes 00:25:57.506 NVM Subsystem Reset: Not Supported 00:25:57.506 Command Sets Supported 00:25:57.506 NVM Command Set: Supported 00:25:57.506 Boot Partition: Not Supported 00:25:57.506 Memory 
Page Size Minimum: 4096 bytes 00:25:57.506 Memory Page Size Maximum: 4096 bytes 00:25:57.506 Persistent Memory Region: Not Supported 00:25:57.506 Optional Asynchronous Events Supported 00:25:57.506 Namespace Attribute Notices: Supported 00:25:57.506 Firmware Activation Notices: Not Supported 00:25:57.506 ANA Change Notices: Supported 00:25:57.506 PLE Aggregate Log Change Notices: Not Supported 00:25:57.506 LBA Status Info Alert Notices: Not Supported 00:25:57.506 EGE Aggregate Log Change Notices: Not Supported 00:25:57.506 Normal NVM Subsystem Shutdown event: Not Supported 00:25:57.506 Zone Descriptor Change Notices: Not Supported 00:25:57.506 Discovery Log Change Notices: Not Supported 00:25:57.506 Controller Attributes 00:25:57.506 128-bit Host Identifier: Supported 00:25:57.506 Non-Operational Permissive Mode: Not Supported 00:25:57.506 NVM Sets: Not Supported 00:25:57.506 Read Recovery Levels: Not Supported 00:25:57.506 Endurance Groups: Not Supported 00:25:57.506 Predictable Latency Mode: Not Supported 00:25:57.506 Traffic Based Keep ALive: Supported 00:25:57.506 Namespace Granularity: Not Supported 00:25:57.506 SQ Associations: Not Supported 00:25:57.506 UUID List: Not Supported 00:25:57.506 Multi-Domain Subsystem: Not Supported 00:25:57.506 Fixed Capacity Management: Not Supported 00:25:57.506 Variable Capacity Management: Not Supported 00:25:57.506 Delete Endurance Group: Not Supported 00:25:57.506 Delete NVM Set: Not Supported 00:25:57.506 Extended LBA Formats Supported: Not Supported 00:25:57.506 Flexible Data Placement Supported: Not Supported 00:25:57.506 00:25:57.506 Controller Memory Buffer Support 00:25:57.506 ================================ 00:25:57.506 Supported: No 00:25:57.506 00:25:57.506 Persistent Memory Region Support 00:25:57.506 ================================ 00:25:57.506 Supported: No 00:25:57.506 00:25:57.506 Admin Command Set Attributes 00:25:57.506 ============================ 00:25:57.506 Security Send/Receive: Not Supported 00:25:57.506 Format NVM: Not Supported 00:25:57.506 Firmware Activate/Download: Not Supported 00:25:57.506 Namespace Management: Not Supported 00:25:57.506 Device Self-Test: Not Supported 00:25:57.506 Directives: Not Supported 00:25:57.506 NVMe-MI: Not Supported 00:25:57.506 Virtualization Management: Not Supported 00:25:57.506 Doorbell Buffer Config: Not Supported 00:25:57.506 Get LBA Status Capability: Not Supported 00:25:57.506 Command & Feature Lockdown Capability: Not Supported 00:25:57.506 Abort Command Limit: 4 00:25:57.506 Async Event Request Limit: 4 00:25:57.506 Number of Firmware Slots: N/A 00:25:57.506 Firmware Slot 1 Read-Only: N/A 00:25:57.506 Firmware Activation Without Reset: N/A 00:25:57.506 Multiple Update Detection Support: N/A 00:25:57.506 Firmware Update Granularity: No Information Provided 00:25:57.506 Per-Namespace SMART Log: Yes 00:25:57.506 Asymmetric Namespace Access Log Page: Supported 00:25:57.506 ANA Transition Time : 10 sec 00:25:57.506 00:25:57.506 Asymmetric Namespace Access Capabilities 00:25:57.506 ANA Optimized State : Supported 00:25:57.506 ANA Non-Optimized State : Supported 00:25:57.506 ANA Inaccessible State : Supported 00:25:57.506 ANA Persistent Loss State : Supported 00:25:57.506 ANA Change State : Supported 00:25:57.506 ANAGRPID is not changed : No 00:25:57.506 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:25:57.506 00:25:57.506 ANA Group Identifier Maximum : 128 00:25:57.506 Number of ANA Group Identifiers : 128 00:25:57.506 Max Number of Allowed Namespaces : 1024 00:25:57.506 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:25:57.506 Command Effects Log Page: Supported 00:25:57.506 Get Log Page Extended Data: Supported 00:25:57.506 Telemetry Log Pages: Not Supported 00:25:57.506 Persistent Event Log Pages: Not Supported 00:25:57.506 Supported Log Pages Log Page: May Support 00:25:57.506 Commands Supported & Effects Log Page: Not Supported 00:25:57.506 Feature Identifiers & Effects Log Page:May Support 00:25:57.506 NVMe-MI Commands & Effects Log Page: May Support 00:25:57.506 Data Area 4 for Telemetry Log: Not Supported 00:25:57.506 Error Log Page Entries Supported: 128 00:25:57.506 Keep Alive: Supported 00:25:57.506 Keep Alive Granularity: 1000 ms 00:25:57.506 00:25:57.506 NVM Command Set Attributes 00:25:57.506 ========================== 00:25:57.506 Submission Queue Entry Size 00:25:57.506 Max: 64 00:25:57.506 Min: 64 00:25:57.506 Completion Queue Entry Size 00:25:57.506 Max: 16 00:25:57.506 Min: 16 00:25:57.506 Number of Namespaces: 1024 00:25:57.506 Compare Command: Not Supported 00:25:57.506 Write Uncorrectable Command: Not Supported 00:25:57.506 Dataset Management Command: Supported 00:25:57.507 Write Zeroes Command: Supported 00:25:57.507 Set Features Save Field: Not Supported 00:25:57.507 Reservations: Not Supported 00:25:57.507 Timestamp: Not Supported 00:25:57.507 Copy: Not Supported 00:25:57.507 Volatile Write Cache: Present 00:25:57.507 Atomic Write Unit (Normal): 1 00:25:57.507 Atomic Write Unit (PFail): 1 00:25:57.507 Atomic Compare & Write Unit: 1 00:25:57.507 Fused Compare & Write: Not Supported 00:25:57.507 Scatter-Gather List 00:25:57.507 SGL Command Set: Supported 00:25:57.507 SGL Keyed: Not Supported 00:25:57.507 SGL Bit Bucket Descriptor: Not Supported 00:25:57.507 SGL Metadata Pointer: Not Supported 00:25:57.507 Oversized SGL: Not Supported 00:25:57.507 SGL Metadata Address: Not Supported 00:25:57.507 SGL Offset: Supported 00:25:57.507 Transport SGL Data Block: Not Supported 00:25:57.507 Replay Protected Memory Block: Not Supported 00:25:57.507 00:25:57.507 Firmware Slot Information 00:25:57.507 ========================= 00:25:57.507 Active slot: 0 00:25:57.507 00:25:57.507 Asymmetric Namespace Access 00:25:57.507 =========================== 00:25:57.507 Change Count : 0 00:25:57.507 Number of ANA Group Descriptors : 1 00:25:57.507 ANA Group Descriptor : 0 00:25:57.507 ANA Group ID : 1 00:25:57.507 Number of NSID Values : 1 00:25:57.507 Change Count : 0 00:25:57.507 ANA State : 1 00:25:57.507 Namespace Identifier : 1 00:25:57.507 00:25:57.507 Commands Supported and Effects 00:25:57.507 ============================== 00:25:57.507 Admin Commands 00:25:57.507 -------------- 00:25:57.507 Get Log Page (02h): Supported 00:25:57.507 Identify (06h): Supported 00:25:57.507 Abort (08h): Supported 00:25:57.507 Set Features (09h): Supported 00:25:57.507 Get Features (0Ah): Supported 00:25:57.507 Asynchronous Event Request (0Ch): Supported 00:25:57.507 Keep Alive (18h): Supported 00:25:57.507 I/O Commands 00:25:57.507 ------------ 00:25:57.507 Flush (00h): Supported 00:25:57.507 Write (01h): Supported LBA-Change 00:25:57.507 Read (02h): Supported 00:25:57.507 Write Zeroes (08h): Supported LBA-Change 00:25:57.507 Dataset Management (09h): Supported 00:25:57.507 00:25:57.507 Error Log 00:25:57.507 ========= 00:25:57.507 Entry: 0 00:25:57.507 Error Count: 0x3 00:25:57.507 Submission Queue Id: 0x0 00:25:57.507 Command Id: 0x5 00:25:57.507 Phase Bit: 0 00:25:57.507 Status Code: 0x2 00:25:57.507 Status Code Type: 0x0 00:25:57.507 Do Not Retry: 1 00:25:57.507 Error 
Location: 0x28 00:25:57.507 LBA: 0x0 00:25:57.507 Namespace: 0x0 00:25:57.507 Vendor Log Page: 0x0 00:25:57.507 ----------- 00:25:57.507 Entry: 1 00:25:57.507 Error Count: 0x2 00:25:57.507 Submission Queue Id: 0x0 00:25:57.507 Command Id: 0x5 00:25:57.507 Phase Bit: 0 00:25:57.507 Status Code: 0x2 00:25:57.507 Status Code Type: 0x0 00:25:57.507 Do Not Retry: 1 00:25:57.507 Error Location: 0x28 00:25:57.507 LBA: 0x0 00:25:57.507 Namespace: 0x0 00:25:57.507 Vendor Log Page: 0x0 00:25:57.507 ----------- 00:25:57.507 Entry: 2 00:25:57.507 Error Count: 0x1 00:25:57.507 Submission Queue Id: 0x0 00:25:57.507 Command Id: 0x4 00:25:57.507 Phase Bit: 0 00:25:57.507 Status Code: 0x2 00:25:57.507 Status Code Type: 0x0 00:25:57.507 Do Not Retry: 1 00:25:57.507 Error Location: 0x28 00:25:57.507 LBA: 0x0 00:25:57.507 Namespace: 0x0 00:25:57.507 Vendor Log Page: 0x0 00:25:57.507 00:25:57.507 Number of Queues 00:25:57.507 ================ 00:25:57.507 Number of I/O Submission Queues: 128 00:25:57.507 Number of I/O Completion Queues: 128 00:25:57.507 00:25:57.507 ZNS Specific Controller Data 00:25:57.507 ============================ 00:25:57.507 Zone Append Size Limit: 0 00:25:57.507 00:25:57.507 00:25:57.507 Active Namespaces 00:25:57.507 ================= 00:25:57.507 get_feature(0x05) failed 00:25:57.507 Namespace ID:1 00:25:57.507 Command Set Identifier: NVM (00h) 00:25:57.507 Deallocate: Supported 00:25:57.507 Deallocated/Unwritten Error: Not Supported 00:25:57.507 Deallocated Read Value: Unknown 00:25:57.507 Deallocate in Write Zeroes: Not Supported 00:25:57.507 Deallocated Guard Field: 0xFFFF 00:25:57.507 Flush: Supported 00:25:57.507 Reservation: Not Supported 00:25:57.507 Namespace Sharing Capabilities: Multiple Controllers 00:25:57.507 Size (in LBAs): 1310720 (5GiB) 00:25:57.507 Capacity (in LBAs): 1310720 (5GiB) 00:25:57.507 Utilization (in LBAs): 1310720 (5GiB) 00:25:57.507 UUID: da8895bb-90ba-4ad9-a41d-ef2a4d93187c 00:25:57.507 Thin Provisioning: Not Supported 00:25:57.507 Per-NS Atomic Units: Yes 00:25:57.507 Atomic Boundary Size (Normal): 0 00:25:57.507 Atomic Boundary Size (PFail): 0 00:25:57.507 Atomic Boundary Offset: 0 00:25:57.507 NGUID/EUI64 Never Reused: No 00:25:57.507 ANA group ID: 1 00:25:57.507 Namespace Write Protected: No 00:25:57.507 Number of LBA Formats: 1 00:25:57.507 Current LBA Format: LBA Format #00 00:25:57.507 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:25:57.507 00:25:57.507 14:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:25:57.507 14:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:57.507 14:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:25:57.507 14:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:57.507 14:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:25:57.507 14:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:57.507 14:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:57.507 rmmod nvme_tcp 00:25:57.507 rmmod nvme_fabrics 00:25:57.507 14:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:57.767 14:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:25:57.767 14:30:25 
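[editor note] For reference while the teardown above runs: the kernel soft-target that produced the discovery and identify output earlier was assembled entirely through nvmet configfs (the mkdir/echo/ln -s trace before the discovery listing; xtrace hides the redirect targets, so the attribute file names below are the standard nvmet ones rather than copied from the log), and clean_kernel_target undoes it in reverse a few lines further down:

```bash
NQN=nqn.2016-06.io.spdk:testnqn
NVMET=/sys/kernel/config/nvmet

# Subsystem with one namespace backed by the scratch device chosen earlier.
mkdir "$NVMET/subsystems/$NQN" "$NVMET/subsystems/$NQN/namespaces/1"
echo 1            > "$NVMET/subsystems/$NQN/attr_allow_any_host"
echo /dev/nvme1n1 > "$NVMET/subsystems/$NQN/namespaces/1/device_path"
echo 1            > "$NVMET/subsystems/$NQN/namespaces/1/enable"

# TCP port 4420 on the target-side address, then expose the subsystem on it.
mkdir "$NVMET/ports/1"
echo 10.0.0.1 > "$NVMET/ports/1/addr_traddr"
echo tcp      > "$NVMET/ports/1/addr_trtype"
echo 4420     > "$NVMET/ports/1/addr_trsvcid"
echo ipv4     > "$NVMET/ports/1/addr_adrfam"
ln -s "$NVMET/subsystems/$NQN" "$NVMET/ports/1/subsystems/"

# Teardown mirrors the setup, as clean_kernel_target does below:
# rm -f "$NVMET/ports/1/subsystems/$NQN"
# echo 0 > "$NVMET/subsystems/$NQN/namespaces/1/enable"
# rmdir "$NVMET/subsystems/$NQN/namespaces/1" "$NVMET/ports/1" "$NVMET/subsystems/$NQN"
# modprobe -r nvmet_tcp nvmet
```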
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:25:57.767 14:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:25:57.767 14:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:57.767 14:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:57.767 14:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:57.767 14:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:25:57.767 14:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:25:57.767 14:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:57.767 14:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:25:57.767 14:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:57.767 14:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:57.767 14:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:57.767 14:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:57.767 14:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:57.767 14:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:57.767 14:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:57.767 14:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:57.767 14:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:57.767 14:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:57.767 14:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:57.767 14:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:57.767 14:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:57.767 14:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:57.767 14:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:58.026 14:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:58.026 14:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:58.026 14:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:58.026 14:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:58.026 14:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:25:58.026 14:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:25:58.026 14:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:58.026 14:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:25:58.026 14:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:58.027 14:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:58.027 14:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:58.027 14:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:58.027 14:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:25:58.027 14:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:25:58.027 14:30:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:58.964 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:58.964 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:25:58.964 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:25:58.964 ************************************ 00:25:58.964 END TEST nvmf_identify_kernel_target 00:25:58.964 ************************************ 00:25:58.964 00:25:58.964 real 0m4.035s 00:25:58.964 user 0m1.292s 00:25:58.964 sys 0m2.126s 00:25:58.964 14:30:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:58.964 14:30:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:59.224 14:30:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:59.224 14:30:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:59.224 14:30:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:59.224 14:30:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.224 ************************************ 00:25:59.224 START TEST nvmf_auth_host 00:25:59.224 ************************************ 00:25:59.224 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:59.224 * Looking for test storage... 
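The clean_kernel_target teardown traced above walks the configfs nvmet hierarchy in reverse creation order before unloading the modules and re-running setup.sh. A minimal sketch of that sequence, with the redirect target of the bare 'echo 0' assumed to be the namespace enable attribute (xtrace does not print redirections):

nqn=nqn.2016-06.io.spdk:testnqn
nvmet=/sys/kernel/config/nvmet
if [[ -e $nvmet/subsystems/$nqn ]]; then
    echo 0 > "$nvmet/subsystems/$nqn/namespaces/1/enable"   # assumed target of the bare 'echo 0'
    rm -f "$nvmet/ports/1/subsystems/$nqn"                  # detach the subsystem from port 1
    rmdir "$nvmet/subsystems/$nqn/namespaces/1"
    rmdir "$nvmet/ports/1"
    rmdir "$nvmet/subsystems/$nqn"
fi
modprobe -r nvmet_tcp nvmet    # only unloads once no configfs entries hold the modules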
00:25:59.224 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:59.224 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:59.224 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:25:59.224 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:59.224 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:59.224 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:59.224 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:59.224 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:59.224 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:59.224 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:59.224 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:59.224 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:59.224 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:59.224 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:59.224 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:59.224 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:59.224 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:25:59.224 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:25:59.224 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:59.224 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:59.224 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:25:59.224 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:25:59.224 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:59.224 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:25:59.224 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:59.224 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:25:59.224 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:25:59.224 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:59.224 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:25:59.224 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:59.224 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:59.224 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:59.224 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:25:59.224 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:59.224 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:59.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.224 --rc genhtml_branch_coverage=1 00:25:59.224 --rc genhtml_function_coverage=1 00:25:59.224 --rc genhtml_legend=1 00:25:59.224 --rc geninfo_all_blocks=1 00:25:59.224 --rc geninfo_unexecuted_blocks=1 00:25:59.224 00:25:59.224 ' 00:25:59.224 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:59.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.224 --rc genhtml_branch_coverage=1 00:25:59.224 --rc genhtml_function_coverage=1 00:25:59.225 --rc genhtml_legend=1 00:25:59.225 --rc geninfo_all_blocks=1 00:25:59.225 --rc geninfo_unexecuted_blocks=1 00:25:59.225 00:25:59.225 ' 00:25:59.225 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:59.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.225 --rc genhtml_branch_coverage=1 00:25:59.225 --rc genhtml_function_coverage=1 00:25:59.225 --rc genhtml_legend=1 00:25:59.225 --rc geninfo_all_blocks=1 00:25:59.225 --rc geninfo_unexecuted_blocks=1 00:25:59.225 00:25:59.225 ' 00:25:59.225 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:59.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.225 --rc genhtml_branch_coverage=1 00:25:59.225 --rc genhtml_function_coverage=1 00:25:59.225 --rc genhtml_legend=1 00:25:59.225 --rc geninfo_all_blocks=1 00:25:59.225 --rc geninfo_unexecuted_blocks=1 00:25:59.225 00:25:59.225 ' 00:25:59.225 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:59.225 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:25:59.225 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:59.225 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:59.225 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:59.225 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:59.225 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:59.225 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:59.225 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:59.225 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:59.225 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:59.485 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:59.485 Cannot find device "nvmf_init_br" 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:59.485 Cannot find device "nvmf_init_br2" 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:25:59.485 Cannot find device "nvmf_tgt_br" 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:25:59.485 Cannot find device "nvmf_tgt_br2" 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:25:59.485 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:59.485 Cannot find device "nvmf_init_br" 00:25:59.486 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:25:59.486 14:30:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:59.486 Cannot find device "nvmf_init_br2" 00:25:59.486 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:25:59.486 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:25:59.486 Cannot find device "nvmf_tgt_br" 00:25:59.486 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:25:59.486 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:59.486 Cannot find device "nvmf_tgt_br2" 00:25:59.486 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:25:59.486 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:59.486 Cannot find device "nvmf_br" 00:25:59.486 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:25:59.486 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:59.486 Cannot find device "nvmf_init_if" 00:25:59.486 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:25:59.486 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:59.486 Cannot find device "nvmf_init_if2" 00:25:59.486 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:25:59.486 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:59.486 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:59.486 14:30:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:25:59.486 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:59.486 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:59.486 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:25:59.486 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:59.745 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:59.745 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:59.745 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:59.745 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:59.745 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:59.745 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:59.745 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:59.745 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:59.745 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:59.745 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:59.745 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:59.745 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:59.745 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:59.745 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:59.745 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:59.745 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:59.745 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:59.745 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:59.745 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:59.745 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:59.745 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:59.745 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:25:59.745 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:59.745 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
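The nvmf_veth_init trace above (the last bridge enslavement, firewall rules and connectivity pings continue below) builds a two-initiator/two-target veth topology: the *_if ends keep 10.0.0.1 and 10.0.0.2 in the root namespace, their counterparts nvmf_tgt_if/nvmf_tgt_if2 get 10.0.0.3 and 10.0.0.4 inside nvmf_tgt_ns_spdk, and the *_br peer ends are joined by the nvmf_br bridge. A condensed sketch of that setup, using the interface names from the log:

ip netns add nvmf_tgt_ns_spdk
# one veth pair per interface; the *_br ends stay in the root namespace
for p in init_if:init_br init_if2:init_br2 tgt_if:tgt_br tgt_if2:tgt_br2; do
    ip link add "nvmf_${p%%:*}" type veth peer name "nvmf_${p##*:}"
done
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
# every interface is then brought up (omitted here) and the *_br ends bridged together
ip link add nvmf_br type bridge
ip link set nvmf_br up
for br in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$br" master nvmf_br
done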
00:25:59.745 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:59.745 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:59.745 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:59.745 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:59.745 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:59.745 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:59.745 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:59.745 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:59.745 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:59.745 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:25:59.745 00:25:59.745 --- 10.0.0.3 ping statistics --- 00:25:59.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:59.745 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:25:59.745 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:59.745 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:25:59.745 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.083 ms 00:25:59.745 00:25:59.745 --- 10.0.0.4 ping statistics --- 00:25:59.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:59.745 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:25:59.745 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:59.745 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:59.745 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:25:59.745 00:25:59.745 --- 10.0.0.1 ping statistics --- 00:25:59.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:59.745 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:25:59.745 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:59.746 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:59.746 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:25:59.746 00:25:59.746 --- 10.0.0.2 ping statistics --- 00:25:59.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:59.746 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:25:59.746 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:59.746 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:25:59.746 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:59.746 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:59.746 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:59.746 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:59.746 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:59.746 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:59.746 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:00.005 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:26:00.005 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:00.005 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:00.005 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.005 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=85241 00:26:00.005 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:26:00.005 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 85241 00:26:00.005 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 85241 ']' 00:26:00.005 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:00.005 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:00.005 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
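The ipts calls traced above, and the iptr call in the earlier nvmf_tcp_fini teardown, share one pattern: every rule SPDK inserts carries an SPDK_NVMF comment so the whole set can later be stripped in a single save/filter/restore pass without touching other rules. A sketch of that pattern, with the helper bodies inferred from the expanded iptables commands shown in the log:

ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; }

ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# later, during teardown, everything tagged SPDK_NVMF is removed at once:
iptr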
00:26:00.005 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:00.005 14:30:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.943 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:00.943 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:26:00.943 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:00.943 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:00.943 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.943 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:00.943 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:26:00.943 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:26:00.943 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:00.943 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:00.943 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:00.943 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:00.943 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:00.943 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:00.943 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2ebb5948253e40d7fc34344834dd3ba1 00:26:00.943 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:00.943 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.7qm 00:26:00.944 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2ebb5948253e40d7fc34344834dd3ba1 0 00:26:00.944 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2ebb5948253e40d7fc34344834dd3ba1 0 00:26:00.944 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:00.944 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:00.944 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2ebb5948253e40d7fc34344834dd3ba1 00:26:00.944 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:00.944 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:00.944 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.7qm 00:26:00.944 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.7qm 00:26:00.944 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.7qm 00:26:00.944 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:26:00.944 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:00.944 14:30:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:00.944 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:00.944 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:26:00.944 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:26:00.944 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:00.944 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e24c4a209aba2eae6fcc1da21bd1c4d45034b30cb9235e730522770967803c99 00:26:00.944 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:26:00.944 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.NYu 00:26:00.944 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e24c4a209aba2eae6fcc1da21bd1c4d45034b30cb9235e730522770967803c99 3 00:26:00.944 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e24c4a209aba2eae6fcc1da21bd1c4d45034b30cb9235e730522770967803c99 3 00:26:00.944 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:00.944 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:00.944 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e24c4a209aba2eae6fcc1da21bd1c4d45034b30cb9235e730522770967803c99 00:26:00.944 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:26:00.944 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:00.944 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.NYu 00:26:00.944 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.NYu 00:26:00.944 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.NYu 00:26:00.944 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:26:00.944 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:00.944 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:00.944 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:00.944 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:00.944 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:00.944 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:00.944 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2ae2d08488c9accb33324b4f7f5b3159b3d73a2c7b2b5455 00:26:00.944 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:01.204 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.iLp 00:26:01.204 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2ae2d08488c9accb33324b4f7f5b3159b3d73a2c7b2b5455 0 00:26:01.204 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2ae2d08488c9accb33324b4f7f5b3159b3d73a2c7b2b5455 0 
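The gen_dhchap_key calls traced above (and repeated for the remaining keys below) all follow the same recipe: draw random bytes with xxd, create a spdk.key-* temp file with mktemp, format the secret with a short Python step, and chmod it to 0600. The Python body is not visible in the trace; the sketch below assumes the standard NVMe DH-HMAC-CHAP secret encoding, base64 of the raw key followed by its little-endian CRC-32, prefixed with DHHC-1:<hash id>, where null=0, sha256=1, sha384=2 and sha512=3 as in the format_dhchap_key arguments:

key=$(xxd -p -c0 -l 16 /dev/urandom)     # 32 hex chars, as in "gen_dhchap_key null 32"
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" 0 > "$file" <<'PY'
import base64, binascii, struct, sys
key = bytes.fromhex(sys.argv[1])         # raw key material from xxd
digest = int(sys.argv[2])                # hash identifier (0 = null)
blob = key + struct.pack('<I', binascii.crc32(key))
print(f"DHHC-1:{digest:02x}:{base64.b64encode(blob).decode()}:")
PY
chmod 0600 "$file"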
00:26:01.204 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:01.204 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:01.204 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2ae2d08488c9accb33324b4f7f5b3159b3d73a2c7b2b5455 00:26:01.204 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:01.204 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:01.204 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.iLp 00:26:01.204 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.iLp 00:26:01.204 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.iLp 00:26:01.204 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:26:01.204 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:01.204 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:01.204 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:01.204 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:26:01.204 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:01.204 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:01.204 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c86a58e8d933cbd5b72ced5a049ab6e1e076f07c0bbea90b 00:26:01.204 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:26:01.204 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.8IS 00:26:01.204 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c86a58e8d933cbd5b72ced5a049ab6e1e076f07c0bbea90b 2 00:26:01.204 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c86a58e8d933cbd5b72ced5a049ab6e1e076f07c0bbea90b 2 00:26:01.204 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:01.204 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:01.204 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c86a58e8d933cbd5b72ced5a049ab6e1e076f07c0bbea90b 00:26:01.204 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:26:01.204 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:01.204 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.8IS 00:26:01.204 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.8IS 00:26:01.204 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.8IS 00:26:01.204 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:01.204 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:01.204 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:01.204 14:30:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:01.204 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:26:01.204 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:01.204 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:01.204 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7b8b8fdbf076dfb16cd331d53f75602b 00:26:01.204 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:26:01.204 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.c7X 00:26:01.204 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7b8b8fdbf076dfb16cd331d53f75602b 1 00:26:01.204 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7b8b8fdbf076dfb16cd331d53f75602b 1 00:26:01.204 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:01.204 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:01.204 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7b8b8fdbf076dfb16cd331d53f75602b 00:26:01.204 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:26:01.204 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:01.204 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.c7X 00:26:01.204 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.c7X 00:26:01.204 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.c7X 00:26:01.204 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:01.204 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:01.204 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:01.204 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:01.204 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:26:01.204 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:01.204 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:01.204 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=17bfb9a07d66b3189eb8ea3025b18f68 00:26:01.204 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:26:01.205 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.qcf 00:26:01.205 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 17bfb9a07d66b3189eb8ea3025b18f68 1 00:26:01.205 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 17bfb9a07d66b3189eb8ea3025b18f68 1 00:26:01.205 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:01.205 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:01.205 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=17bfb9a07d66b3189eb8ea3025b18f68 00:26:01.205 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:26:01.205 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:01.464 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.qcf 00:26:01.465 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.qcf 00:26:01.465 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.qcf 00:26:01.465 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:26:01.465 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:01.465 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:01.465 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:01.465 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:26:01.465 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:01.465 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:01.465 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=be5f47b781fc87fb10c24ad182176f6b9813ef13ec16d5c0 00:26:01.465 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:26:01.465 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.6JN 00:26:01.465 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key be5f47b781fc87fb10c24ad182176f6b9813ef13ec16d5c0 2 00:26:01.465 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 be5f47b781fc87fb10c24ad182176f6b9813ef13ec16d5c0 2 00:26:01.465 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:01.465 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:01.465 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=be5f47b781fc87fb10c24ad182176f6b9813ef13ec16d5c0 00:26:01.465 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:26:01.465 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:01.465 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.6JN 00:26:01.465 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.6JN 00:26:01.465 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.6JN 00:26:01.465 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:26:01.465 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:01.465 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:01.465 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:01.465 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:01.465 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:01.465 14:30:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:01.465 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=62b485e2956396295404a829ade3a875 00:26:01.465 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:01.465 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.l10 00:26:01.465 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 62b485e2956396295404a829ade3a875 0 00:26:01.465 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 62b485e2956396295404a829ade3a875 0 00:26:01.465 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:01.465 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:01.465 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=62b485e2956396295404a829ade3a875 00:26:01.465 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:01.465 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:01.465 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.l10 00:26:01.465 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.l10 00:26:01.465 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.l10 00:26:01.465 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:26:01.465 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:01.465 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:01.465 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:01.465 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:26:01.465 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:26:01.465 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:01.465 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3014cc22394cfa939a10c645dc5656c14474cf1f1aa7ffef4b86f1809c266558 00:26:01.465 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:26:01.465 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.kmK 00:26:01.465 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3014cc22394cfa939a10c645dc5656c14474cf1f1aa7ffef4b86f1809c266558 3 00:26:01.465 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3014cc22394cfa939a10c645dc5656c14474cf1f1aa7ffef4b86f1809c266558 3 00:26:01.465 14:30:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:01.465 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:01.465 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3014cc22394cfa939a10c645dc5656c14474cf1f1aa7ffef4b86f1809c266558 00:26:01.465 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:26:01.465 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:26:01.465 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.kmK 00:26:01.465 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.kmK 00:26:01.465 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.kmK 00:26:01.465 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:26:01.465 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 85241 00:26:01.465 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 85241 ']' 00:26:01.465 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:01.465 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:01.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:01.465 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:01.465 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:01.465 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.723 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:01.723 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:26:01.723 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:01.723 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.7qm 00:26:01.723 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.723 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.723 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.723 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.NYu ]] 00:26:01.723 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.NYu 00:26:01.723 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.723 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.723 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.723 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:01.723 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.iLp 00:26:01.723 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.723 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.723 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.723 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.8IS ]] 00:26:01.723 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.8IS 00:26:01.723 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.723 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.723 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.723 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:01.723 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.c7X 00:26:01.723 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.723 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.723 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.723 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.qcf ]] 00:26:01.723 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.qcf 00:26:01.723 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.723 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.723 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.723 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:01.723 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.6JN 00:26:01.723 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.723 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.723 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.723 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.l10 ]] 00:26:01.723 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.l10 00:26:01.723 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.723 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.981 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.981 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:01.981 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.kmK 00:26:01.981 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.981 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.981 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.981 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:26:01.981 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:26:01.981 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:26:01.981 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:01.981 14:30:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:01.981 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:01.981 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.981 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.981 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:01.981 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.981 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:01.981 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:01.981 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:01.982 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:26:01.982 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:26:01.982 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:26:01.982 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:01.982 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:01.982 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:01.982 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:26:01.982 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:26:01.982 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:26:01.982 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:01.982 14:30:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:02.549 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:02.549 Waiting for block devices as requested 00:26:02.549 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:26:02.549 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:26:03.508 14:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:03.508 14:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:03.508 14:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:26:03.508 14:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:26:03.508 14:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:03.508 14:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:03.508 14:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:26:03.508 14:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:03.508 14:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:26:03.508 No valid GPT data, bailing 00:26:03.508 14:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:03.508 14:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:26:03.508 14:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:26:03.508 14:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:26:03.508 14:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:03.508 14:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:26:03.508 14:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:26:03.508 14:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:26:03.508 14:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:26:03.508 14:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:03.508 14:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:26:03.508 14:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:26:03.508 14:30:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:26:03.508 No valid GPT data, bailing 00:26:03.508 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:26:03.508 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:26:03.508 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:26:03.508 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:26:03.508 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:03.508 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:26:03.508 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:26:03.508 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:26:03.508 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:26:03.508 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:03.508 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:26:03.508 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:26:03.508 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:26:03.508 No valid GPT data, bailing 00:26:03.508 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:26:03.508 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:26:03.508 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:26:03.508 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:26:03.508 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:03.508 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:26:03.508 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:26:03.508 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:26:03.508 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:26:03.508 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:03.508 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:26:03.508 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:26:03.508 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:26:03.768 No valid GPT data, bailing 00:26:03.768 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:26:03.768 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:26:03.768 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:26:03.768 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:26:03.768 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:26:03.768 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:03.768 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:03.768 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:03.768 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:26:03.768 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:26:03.768 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:26:03.768 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:26:03.768 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:26:03.768 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:26:03.768 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:26:03.768 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:26:03.768 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:03.768 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid=406d54d0-5e94-472a-a2b3-4291f3ac81e0 -a 10.0.0.1 -t tcp -s 4420 00:26:03.768 00:26:03.768 Discovery Log Number of Records 2, Generation counter 2 00:26:03.768 =====Discovery Log Entry 0====== 00:26:03.768 trtype: tcp 00:26:03.768 adrfam: ipv4 00:26:03.768 subtype: current discovery subsystem 00:26:03.768 treq: not specified, sq flow control disable supported 00:26:03.768 portid: 1 00:26:03.768 trsvcid: 4420 00:26:03.768 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:03.768 traddr: 10.0.0.1 00:26:03.768 eflags: none 00:26:03.768 sectype: none 00:26:03.768 =====Discovery Log Entry 1====== 00:26:03.768 trtype: tcp 00:26:03.768 adrfam: ipv4 00:26:03.768 subtype: nvme subsystem 00:26:03.768 treq: not specified, sq flow control disable supported 00:26:03.768 portid: 1 00:26:03.768 trsvcid: 4420 00:26:03.768 subnqn: nqn.2024-02.io.spdk:cnode0 00:26:03.768 traddr: 10.0.0.1 00:26:03.769 eflags: none 00:26:03.769 sectype: none 00:26:03.769 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:03.769 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:26:03.769 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:03.769 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:03.769 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.769 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:03.769 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:03.769 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:03.769 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmFlMmQwODQ4OGM5YWNjYjMzMzI0YjRmN2Y1YjMxNTliM2Q3M2EyYzdiMmI1NDU10+GMbA==: 00:26:03.769 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:Yzg2YTU4ZThkOTMzY2JkNWI3MmNlZDVhMDQ5YWI2ZTFlMDc2ZjA3YzBiYmVhOTBiT+gSUg==: 00:26:03.769 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:03.769 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:03.769 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmFlMmQwODQ4OGM5YWNjYjMzMzI0YjRmN2Y1YjMxNTliM2Q3M2EyYzdiMmI1NDU10+GMbA==: 00:26:03.769 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yzg2YTU4ZThkOTMzY2JkNWI3MmNlZDVhMDQ5YWI2ZTFlMDc2ZjA3YzBiYmVhOTBiT+gSUg==: ]] 00:26:03.769 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yzg2YTU4ZThkOTMzY2JkNWI3MmNlZDVhMDQ5YWI2ZTFlMDc2ZjA3YzBiYmVhOTBiT+gSUg==: 00:26:03.769 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:03.769 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:26:03.769 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:03.769 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:03.769 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:26:03.769 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.769 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:26:03.769 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:03.769 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:03.769 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.769 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:03.769 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.769 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.769 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.028 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.028 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:04.028 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:04.028 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:04.029 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.029 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.029 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:04.029 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.029 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:04.029 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:26:04.029 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:04.029 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:04.029 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.029 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.029 nvme0n1 00:26:04.029 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.029 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.029 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.029 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.029 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.029 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.029 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.029 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.029 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.029 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.029 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.029 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:04.029 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:04.029 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.029 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:26:04.029 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.029 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:04.029 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:04.029 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:04.029 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmViYjU5NDgyNTNlNDBkN2ZjMzQzNDQ4MzRkZDNiYTHr+RU7: 00:26:04.029 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTI0YzRhMjA5YWJhMmVhZTZmY2MxZGEyMWJkMWM0ZDQ1MDM0YjMwY2I5MjM1ZTczMDUyMjc3MDk2NzgwM2M5OTBTI5c=: 00:26:04.029 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:04.029 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:04.029 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmViYjU5NDgyNTNlNDBkN2ZjMzQzNDQ4MzRkZDNiYTHr+RU7: 00:26:04.029 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTI0YzRhMjA5YWJhMmVhZTZmY2MxZGEyMWJkMWM0ZDQ1MDM0YjMwY2I5MjM1ZTczMDUyMjc3MDk2NzgwM2M5OTBTI5c=: ]] 00:26:04.029 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZTI0YzRhMjA5YWJhMmVhZTZmY2MxZGEyMWJkMWM0ZDQ1MDM0YjMwY2I5MjM1ZTczMDUyMjc3MDk2NzgwM2M5OTBTI5c=: 00:26:04.029 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:26:04.029 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.029 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:04.029 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:04.029 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:04.029 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.029 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:04.029 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.029 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.029 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.029 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.029 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:04.029 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:04.029 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:04.029 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.029 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.029 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:04.029 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.029 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:04.029 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:04.029 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:04.029 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:04.029 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.029 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.289 nvme0n1 00:26:04.289 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.289 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.289 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.289 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.289 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.289 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.289 
14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.289 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.289 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.289 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.289 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.289 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.289 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:04.289 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.289 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:04.289 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:04.289 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:04.289 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmFlMmQwODQ4OGM5YWNjYjMzMzI0YjRmN2Y1YjMxNTliM2Q3M2EyYzdiMmI1NDU10+GMbA==: 00:26:04.289 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yzg2YTU4ZThkOTMzY2JkNWI3MmNlZDVhMDQ5YWI2ZTFlMDc2ZjA3YzBiYmVhOTBiT+gSUg==: 00:26:04.289 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:04.289 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:04.289 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmFlMmQwODQ4OGM5YWNjYjMzMzI0YjRmN2Y1YjMxNTliM2Q3M2EyYzdiMmI1NDU10+GMbA==: 00:26:04.289 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yzg2YTU4ZThkOTMzY2JkNWI3MmNlZDVhMDQ5YWI2ZTFlMDc2ZjA3YzBiYmVhOTBiT+gSUg==: ]] 00:26:04.289 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yzg2YTU4ZThkOTMzY2JkNWI3MmNlZDVhMDQ5YWI2ZTFlMDc2ZjA3YzBiYmVhOTBiT+gSUg==: 00:26:04.289 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:26:04.289 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.289 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:04.289 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:04.289 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:04.289 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.289 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:04.289 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.289 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.289 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.289 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.289 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:04.289 14:30:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:04.289 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:04.289 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.289 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.289 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:04.289 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.289 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:04.289 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:04.289 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:04.289 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:04.289 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.289 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.289 nvme0n1 00:26:04.289 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.289 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.289 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.289 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.289 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.289 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.289 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.289 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.289 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.289 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.549 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.549 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.549 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:04.549 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.549 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:04.549 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:04.549 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:04.549 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2I4YjhmZGJmMDc2ZGZiMTZjZDMzMWQ1M2Y3NTYwMmK56u8u: 00:26:04.549 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTdiZmI5YTA3ZDY2YjMxODllYjhlYTMwMjViMThmNjhJ7IzP: 00:26:04.549 14:30:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:04.549 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:04.549 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2I4YjhmZGJmMDc2ZGZiMTZjZDMzMWQ1M2Y3NTYwMmK56u8u: 00:26:04.549 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTdiZmI5YTA3ZDY2YjMxODllYjhlYTMwMjViMThmNjhJ7IzP: ]] 00:26:04.549 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTdiZmI5YTA3ZDY2YjMxODllYjhlYTMwMjViMThmNjhJ7IzP: 00:26:04.549 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:26:04.549 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.549 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:04.549 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:04.549 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:04.549 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.549 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:04.549 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.549 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.549 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.549 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.549 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:04.549 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:04.549 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:04.549 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.549 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.549 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:04.549 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.549 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:04.549 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:04.549 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:04.549 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:04.549 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.549 14:30:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.549 nvme0n1 00:26:04.549 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.549 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.549 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.549 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.549 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.549 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.549 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.549 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.549 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.549 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.549 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.549 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.549 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:26:04.549 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.549 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:04.549 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:04.549 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:04.549 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmU1ZjQ3Yjc4MWZjODdmYjEwYzI0YWQxODIxNzZmNmI5ODEzZWYxM2VjMTZkNWMwDu5arA==: 00:26:04.549 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjJiNDg1ZTI5NTYzOTYyOTU0MDRhODI5YWRlM2E4NzWcjrTQ: 00:26:04.549 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:04.550 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:04.550 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmU1ZjQ3Yjc4MWZjODdmYjEwYzI0YWQxODIxNzZmNmI5ODEzZWYxM2VjMTZkNWMwDu5arA==: 00:26:04.550 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjJiNDg1ZTI5NTYzOTYyOTU0MDRhODI5YWRlM2E4NzWcjrTQ: ]] 00:26:04.550 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjJiNDg1ZTI5NTYzOTYyOTU0MDRhODI5YWRlM2E4NzWcjrTQ: 00:26:04.550 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:26:04.550 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.550 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:04.550 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:04.550 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:04.550 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.550 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:04.550 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.550 14:30:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.550 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.550 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.550 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:04.550 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:04.550 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:04.550 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.550 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.550 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:04.550 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.550 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:04.550 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:04.550 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:04.550 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:04.550 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.550 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.810 nvme0n1 00:26:04.810 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.810 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.810 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.810 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.810 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.810 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.810 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.810 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.810 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.810 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.810 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.810 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.810 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:26:04.810 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.810 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:04.810 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:04.810 
14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:04.810 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzAxNGNjMjIzOTRjZmE5MzlhMTBjNjQ1ZGM1NjU2YzE0NDc0Y2YxZjFhYTdmZmVmNGI4NmYxODA5YzI2NjU1OFRqrsw=: 00:26:04.810 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:04.810 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:04.810 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:04.810 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzAxNGNjMjIzOTRjZmE5MzlhMTBjNjQ1ZGM1NjU2YzE0NDc0Y2YxZjFhYTdmZmVmNGI4NmYxODA5YzI2NjU1OFRqrsw=: 00:26:04.810 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:04.810 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:26:04.810 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.810 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:04.810 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:04.810 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:04.810 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.810 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:04.810 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.810 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.810 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.810 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.810 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:04.810 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:04.810 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:04.810 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.810 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.810 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:04.810 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.810 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:04.810 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:04.810 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:04.810 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:04.810 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.810 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
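The trace above loads key0 through key4 into the keyring with keyring_file_add_key, restricts the negotiable digests and DH groups with bdev_nvme_set_options, and then authenticates against the kernel nvmet target with bdev_nvme_attach_controller using --dhchap-key/--dhchap-ctrlr-key. A minimal hand-driven sketch of the same DH-CHAP sequence, assuming the rpc_cmd helper in the trace wraps SPDK's scripts/rpc.py and reusing the key files and NQNs generated earlier in this run purely as placeholders:

    # load the host key and its controller (bidirectional) counterpart into the keyring
    scripts/rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.7qm
    scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.NYu
    # limit the digests/DH groups the initiator will negotiate (sha256 + ffdhe2048 here)
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    # attach to the kernel nvmet target configured above, authenticating with key0/ckey0
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

Detaching with bdev_nvme_detach_controller nvme0, as the loop below does after each digest/dhgroup/keyid combination, leaves the target ready for the next attempt.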
00:26:04.810 nvme0n1 00:26:04.810 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.810 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.810 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.810 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.810 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.810 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.070 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.070 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.070 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.070 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.070 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.070 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:05.070 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.070 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:26:05.070 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.070 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:05.070 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:05.070 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:05.070 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmViYjU5NDgyNTNlNDBkN2ZjMzQzNDQ4MzRkZDNiYTHr+RU7: 00:26:05.070 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTI0YzRhMjA5YWJhMmVhZTZmY2MxZGEyMWJkMWM0ZDQ1MDM0YjMwY2I5MjM1ZTczMDUyMjc3MDk2NzgwM2M5OTBTI5c=: 00:26:05.070 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:05.070 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:05.070 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmViYjU5NDgyNTNlNDBkN2ZjMzQzNDQ4MzRkZDNiYTHr+RU7: 00:26:05.070 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTI0YzRhMjA5YWJhMmVhZTZmY2MxZGEyMWJkMWM0ZDQ1MDM0YjMwY2I5MjM1ZTczMDUyMjc3MDk2NzgwM2M5OTBTI5c=: ]] 00:26:05.070 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTI0YzRhMjA5YWJhMmVhZTZmY2MxZGEyMWJkMWM0ZDQ1MDM0YjMwY2I5MjM1ZTczMDUyMjc3MDk2NzgwM2M5OTBTI5c=: 00:26:05.070 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:26:05.070 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.070 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:05.070 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:05.070 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:05.070 14:30:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.070 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:05.070 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.070 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.330 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.330 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.330 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:05.330 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:05.330 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:05.330 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.330 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.330 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:05.330 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.330 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:05.330 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:05.330 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:05.330 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:05.330 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.330 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.330 nvme0n1 00:26:05.330 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.330 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.330 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.330 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.330 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.330 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.330 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.330 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.330 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.330 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.330 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.330 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.330 14:30:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:26:05.330 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.330 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:05.330 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:05.330 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:05.330 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmFlMmQwODQ4OGM5YWNjYjMzMzI0YjRmN2Y1YjMxNTliM2Q3M2EyYzdiMmI1NDU10+GMbA==: 00:26:05.330 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yzg2YTU4ZThkOTMzY2JkNWI3MmNlZDVhMDQ5YWI2ZTFlMDc2ZjA3YzBiYmVhOTBiT+gSUg==: 00:26:05.330 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:05.330 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:05.330 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmFlMmQwODQ4OGM5YWNjYjMzMzI0YjRmN2Y1YjMxNTliM2Q3M2EyYzdiMmI1NDU10+GMbA==: 00:26:05.330 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yzg2YTU4ZThkOTMzY2JkNWI3MmNlZDVhMDQ5YWI2ZTFlMDc2ZjA3YzBiYmVhOTBiT+gSUg==: ]] 00:26:05.330 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yzg2YTU4ZThkOTMzY2JkNWI3MmNlZDVhMDQ5YWI2ZTFlMDc2ZjA3YzBiYmVhOTBiT+gSUg==: 00:26:05.330 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:26:05.330 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.330 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:05.330 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:05.330 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:05.330 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.330 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:05.330 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.330 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.330 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.330 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.330 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:05.330 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:05.330 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:05.330 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.330 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.330 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:05.330 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.330 14:30:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:05.330 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:05.330 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:05.330 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:05.330 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.330 14:30:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.590 nvme0n1 00:26:05.590 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.590 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.590 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.590 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.590 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.590 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.590 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.590 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.590 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.590 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.590 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.590 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.590 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:26:05.590 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.590 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:05.590 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:05.590 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:05.590 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2I4YjhmZGJmMDc2ZGZiMTZjZDMzMWQ1M2Y3NTYwMmK56u8u: 00:26:05.590 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTdiZmI5YTA3ZDY2YjMxODllYjhlYTMwMjViMThmNjhJ7IzP: 00:26:05.590 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:05.590 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:05.590 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2I4YjhmZGJmMDc2ZGZiMTZjZDMzMWQ1M2Y3NTYwMmK56u8u: 00:26:05.590 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTdiZmI5YTA3ZDY2YjMxODllYjhlYTMwMjViMThmNjhJ7IzP: ]] 00:26:05.590 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTdiZmI5YTA3ZDY2YjMxODllYjhlYTMwMjViMThmNjhJ7IzP: 00:26:05.590 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:26:05.590 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.590 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:05.590 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:05.590 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:05.590 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.590 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:05.590 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.590 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.590 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.590 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.590 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:05.590 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:05.590 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:05.590 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.590 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.590 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:05.590 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.590 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:05.590 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:05.590 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:05.590 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:05.590 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.590 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.849 nvme0n1 00:26:05.849 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.849 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.849 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.849 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.849 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.849 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.849 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.849 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:05.849 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.849 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.849 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.849 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.849 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:26:05.849 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.849 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:05.849 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:05.850 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:05.850 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmU1ZjQ3Yjc4MWZjODdmYjEwYzI0YWQxODIxNzZmNmI5ODEzZWYxM2VjMTZkNWMwDu5arA==: 00:26:05.850 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjJiNDg1ZTI5NTYzOTYyOTU0MDRhODI5YWRlM2E4NzWcjrTQ: 00:26:05.850 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:05.850 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:05.850 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmU1ZjQ3Yjc4MWZjODdmYjEwYzI0YWQxODIxNzZmNmI5ODEzZWYxM2VjMTZkNWMwDu5arA==: 00:26:05.850 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjJiNDg1ZTI5NTYzOTYyOTU0MDRhODI5YWRlM2E4NzWcjrTQ: ]] 00:26:05.850 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjJiNDg1ZTI5NTYzOTYyOTU0MDRhODI5YWRlM2E4NzWcjrTQ: 00:26:05.850 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:26:05.850 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.850 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:05.850 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:05.850 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:05.850 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.850 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:05.850 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.850 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.850 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.850 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.850 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:05.850 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:05.850 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:05.850 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.850 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.850 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:05.850 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.850 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:05.850 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:05.850 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:05.850 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:05.850 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.850 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.850 nvme0n1 00:26:05.850 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.850 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.850 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.850 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.850 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.850 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.850 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.850 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.850 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.850 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.110 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.110 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.110 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:26:06.110 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.110 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:06.110 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:06.110 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:06.110 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzAxNGNjMjIzOTRjZmE5MzlhMTBjNjQ1ZGM1NjU2YzE0NDc0Y2YxZjFhYTdmZmVmNGI4NmYxODA5YzI2NjU1OFRqrsw=: 00:26:06.110 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:06.110 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:06.110 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:06.110 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MzAxNGNjMjIzOTRjZmE5MzlhMTBjNjQ1ZGM1NjU2YzE0NDc0Y2YxZjFhYTdmZmVmNGI4NmYxODA5YzI2NjU1OFRqrsw=: 00:26:06.110 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:06.110 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:26:06.110 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.110 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:06.110 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:06.110 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:06.110 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.110 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:06.110 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.110 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.110 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.110 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.110 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:06.110 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:06.110 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:06.110 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.110 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.110 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:06.110 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.110 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:06.110 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:06.110 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:06.110 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:06.110 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.110 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.110 nvme0n1 00:26:06.110 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.110 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.110 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.110 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.110 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.110 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.110 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.110 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.110 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.110 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.110 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.110 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:06.110 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.110 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:26:06.110 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.110 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:06.110 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:06.110 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:06.110 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmViYjU5NDgyNTNlNDBkN2ZjMzQzNDQ4MzRkZDNiYTHr+RU7: 00:26:06.110 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTI0YzRhMjA5YWJhMmVhZTZmY2MxZGEyMWJkMWM0ZDQ1MDM0YjMwY2I5MjM1ZTczMDUyMjc3MDk2NzgwM2M5OTBTI5c=: 00:26:06.110 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:06.110 14:30:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:06.679 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmViYjU5NDgyNTNlNDBkN2ZjMzQzNDQ4MzRkZDNiYTHr+RU7: 00:26:06.679 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTI0YzRhMjA5YWJhMmVhZTZmY2MxZGEyMWJkMWM0ZDQ1MDM0YjMwY2I5MjM1ZTczMDUyMjc3MDk2NzgwM2M5OTBTI5c=: ]] 00:26:06.679 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTI0YzRhMjA5YWJhMmVhZTZmY2MxZGEyMWJkMWM0ZDQ1MDM0YjMwY2I5MjM1ZTczMDUyMjc3MDk2NzgwM2M5OTBTI5c=: 00:26:06.679 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:26:06.679 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.679 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:06.679 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:06.679 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:06.679 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.679 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:06.679 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.679 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.679 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.679 14:30:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.679 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:06.679 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:06.679 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:06.679 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.679 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.679 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:06.679 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.679 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:06.679 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:06.679 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:06.679 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:06.679 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.679 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.938 nvme0n1 00:26:06.938 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.938 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.938 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.938 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.938 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.938 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.938 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.938 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.938 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.938 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.938 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.938 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.938 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:26:06.938 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.938 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:06.938 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:06.938 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:06.938 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MmFlMmQwODQ4OGM5YWNjYjMzMzI0YjRmN2Y1YjMxNTliM2Q3M2EyYzdiMmI1NDU10+GMbA==: 00:26:06.938 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yzg2YTU4ZThkOTMzY2JkNWI3MmNlZDVhMDQ5YWI2ZTFlMDc2ZjA3YzBiYmVhOTBiT+gSUg==: 00:26:06.938 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:06.938 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:06.938 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmFlMmQwODQ4OGM5YWNjYjMzMzI0YjRmN2Y1YjMxNTliM2Q3M2EyYzdiMmI1NDU10+GMbA==: 00:26:06.938 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yzg2YTU4ZThkOTMzY2JkNWI3MmNlZDVhMDQ5YWI2ZTFlMDc2ZjA3YzBiYmVhOTBiT+gSUg==: ]] 00:26:06.938 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yzg2YTU4ZThkOTMzY2JkNWI3MmNlZDVhMDQ5YWI2ZTFlMDc2ZjA3YzBiYmVhOTBiT+gSUg==: 00:26:06.938 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:26:06.938 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.938 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:06.938 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:06.938 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:06.938 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.938 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:06.938 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.938 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.938 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.938 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.938 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:06.938 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:06.938 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:06.938 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.938 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.938 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:06.938 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.938 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:06.938 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:06.938 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:06.938 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:06.938 14:30:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.938 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.198 nvme0n1 00:26:07.198 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.198 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.198 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.198 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.198 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.198 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.198 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.198 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.198 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.198 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.198 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.198 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.198 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:26:07.198 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.198 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:07.198 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:07.198 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:07.198 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2I4YjhmZGJmMDc2ZGZiMTZjZDMzMWQ1M2Y3NTYwMmK56u8u: 00:26:07.198 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTdiZmI5YTA3ZDY2YjMxODllYjhlYTMwMjViMThmNjhJ7IzP: 00:26:07.198 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:07.198 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:07.198 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2I4YjhmZGJmMDc2ZGZiMTZjZDMzMWQ1M2Y3NTYwMmK56u8u: 00:26:07.198 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTdiZmI5YTA3ZDY2YjMxODllYjhlYTMwMjViMThmNjhJ7IzP: ]] 00:26:07.198 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTdiZmI5YTA3ZDY2YjMxODllYjhlYTMwMjViMThmNjhJ7IzP: 00:26:07.198 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:26:07.198 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.198 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:07.198 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:07.198 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:07.198 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.198 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:07.198 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.198 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.198 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.198 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.198 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:07.198 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:07.198 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:07.198 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.198 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.198 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:07.198 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.198 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:07.198 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:07.198 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:07.198 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:07.198 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.198 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.458 nvme0n1 00:26:07.458 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.458 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.458 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.458 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.458 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.458 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.458 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.458 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.458 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.458 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.458 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.458 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.458 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:26:07.458 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.458 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:07.458 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:07.458 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:07.458 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmU1ZjQ3Yjc4MWZjODdmYjEwYzI0YWQxODIxNzZmNmI5ODEzZWYxM2VjMTZkNWMwDu5arA==: 00:26:07.458 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjJiNDg1ZTI5NTYzOTYyOTU0MDRhODI5YWRlM2E4NzWcjrTQ: 00:26:07.458 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:07.458 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:07.458 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmU1ZjQ3Yjc4MWZjODdmYjEwYzI0YWQxODIxNzZmNmI5ODEzZWYxM2VjMTZkNWMwDu5arA==: 00:26:07.458 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjJiNDg1ZTI5NTYzOTYyOTU0MDRhODI5YWRlM2E4NzWcjrTQ: ]] 00:26:07.458 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjJiNDg1ZTI5NTYzOTYyOTU0MDRhODI5YWRlM2E4NzWcjrTQ: 00:26:07.458 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:26:07.458 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.458 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:07.458 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:07.458 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:07.458 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.458 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:07.458 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.458 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.458 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.458 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.458 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:07.458 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:07.458 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:07.458 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.458 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.458 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:07.458 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.458 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:07.458 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:07.458 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:07.458 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:07.458 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.458 14:30:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.458 nvme0n1 00:26:07.458 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.458 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.458 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.458 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.458 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.718 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.718 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.718 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.718 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.718 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.718 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.718 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.718 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:26:07.718 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.718 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:07.718 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:07.718 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:07.718 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzAxNGNjMjIzOTRjZmE5MzlhMTBjNjQ1ZGM1NjU2YzE0NDc0Y2YxZjFhYTdmZmVmNGI4NmYxODA5YzI2NjU1OFRqrsw=: 00:26:07.718 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:07.718 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:07.718 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:07.718 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzAxNGNjMjIzOTRjZmE5MzlhMTBjNjQ1ZGM1NjU2YzE0NDc0Y2YxZjFhYTdmZmVmNGI4NmYxODA5YzI2NjU1OFRqrsw=: 00:26:07.718 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:07.718 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:26:07.718 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.718 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:07.718 14:30:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:07.718 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:07.718 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.718 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:07.718 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.718 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.718 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.718 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.718 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:07.718 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:07.718 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:07.718 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.718 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.718 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:07.718 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.718 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:07.718 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:07.718 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:07.718 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:07.718 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.718 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.718 nvme0n1 00:26:07.718 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.718 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.718 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.718 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.718 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.718 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.978 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.978 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.978 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.978 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.978 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.978 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:07.978 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.978 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:26:07.978 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.978 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:07.978 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:07.978 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:07.978 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmViYjU5NDgyNTNlNDBkN2ZjMzQzNDQ4MzRkZDNiYTHr+RU7: 00:26:07.978 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTI0YzRhMjA5YWJhMmVhZTZmY2MxZGEyMWJkMWM0ZDQ1MDM0YjMwY2I5MjM1ZTczMDUyMjc3MDk2NzgwM2M5OTBTI5c=: 00:26:07.978 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:07.978 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:09.356 14:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmViYjU5NDgyNTNlNDBkN2ZjMzQzNDQ4MzRkZDNiYTHr+RU7: 00:26:09.356 14:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTI0YzRhMjA5YWJhMmVhZTZmY2MxZGEyMWJkMWM0ZDQ1MDM0YjMwY2I5MjM1ZTczMDUyMjc3MDk2NzgwM2M5OTBTI5c=: ]] 00:26:09.356 14:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTI0YzRhMjA5YWJhMmVhZTZmY2MxZGEyMWJkMWM0ZDQ1MDM0YjMwY2I5MjM1ZTczMDUyMjc3MDk2NzgwM2M5OTBTI5c=: 00:26:09.356 14:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:26:09.356 14:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.356 14:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:09.356 14:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:09.356 14:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:09.356 14:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.356 14:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:09.356 14:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.356 14:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.356 14:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.356 14:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.356 14:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:09.356 14:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:09.356 14:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:09.356 14:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.356 14:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.356 14:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:09.356 14:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.356 14:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:09.356 14:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:09.356 14:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:09.356 14:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:09.356 14:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.356 14:30:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.616 nvme0n1 00:26:09.616 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.616 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.616 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.616 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.616 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.616 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.616 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.616 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.616 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.616 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.616 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.616 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.616 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:26:09.616 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.616 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:09.616 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:09.616 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:09.616 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmFlMmQwODQ4OGM5YWNjYjMzMzI0YjRmN2Y1YjMxNTliM2Q3M2EyYzdiMmI1NDU10+GMbA==: 00:26:09.616 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yzg2YTU4ZThkOTMzY2JkNWI3MmNlZDVhMDQ5YWI2ZTFlMDc2ZjA3YzBiYmVhOTBiT+gSUg==: 00:26:09.616 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:09.616 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:09.616 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MmFlMmQwODQ4OGM5YWNjYjMzMzI0YjRmN2Y1YjMxNTliM2Q3M2EyYzdiMmI1NDU10+GMbA==: 00:26:09.616 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yzg2YTU4ZThkOTMzY2JkNWI3MmNlZDVhMDQ5YWI2ZTFlMDc2ZjA3YzBiYmVhOTBiT+gSUg==: ]] 00:26:09.616 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yzg2YTU4ZThkOTMzY2JkNWI3MmNlZDVhMDQ5YWI2ZTFlMDc2ZjA3YzBiYmVhOTBiT+gSUg==: 00:26:09.616 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:26:09.616 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.616 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:09.616 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:09.616 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:09.616 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.616 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:09.616 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.616 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.616 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.616 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.616 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:09.616 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:09.616 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:09.616 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.616 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.616 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:09.616 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.616 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:09.616 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:09.616 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:09.616 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:09.616 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.616 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.876 nvme0n1 00:26:09.876 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.876 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.876 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.876 14:30:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.876 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.876 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.876 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.876 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.876 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.876 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.876 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.876 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.876 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:26:09.876 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.876 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:09.876 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:09.876 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:09.876 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2I4YjhmZGJmMDc2ZGZiMTZjZDMzMWQ1M2Y3NTYwMmK56u8u: 00:26:09.876 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTdiZmI5YTA3ZDY2YjMxODllYjhlYTMwMjViMThmNjhJ7IzP: 00:26:09.876 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:09.876 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:09.876 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2I4YjhmZGJmMDc2ZGZiMTZjZDMzMWQ1M2Y3NTYwMmK56u8u: 00:26:09.876 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTdiZmI5YTA3ZDY2YjMxODllYjhlYTMwMjViMThmNjhJ7IzP: ]] 00:26:09.876 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTdiZmI5YTA3ZDY2YjMxODllYjhlYTMwMjViMThmNjhJ7IzP: 00:26:09.876 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:26:09.876 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.876 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:09.876 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:09.876 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:09.876 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.876 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:09.876 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.876 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.876 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.876 14:30:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.876 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:09.876 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:09.876 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:09.876 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.876 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.876 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:09.876 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.876 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:09.876 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:09.876 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:09.876 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:09.876 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.876 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.444 nvme0n1 00:26:10.444 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.445 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.445 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.445 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.445 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.445 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.445 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.445 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.445 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.445 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.445 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.445 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.445 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:26:10.445 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.445 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:10.445 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:10.445 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:10.445 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YmU1ZjQ3Yjc4MWZjODdmYjEwYzI0YWQxODIxNzZmNmI5ODEzZWYxM2VjMTZkNWMwDu5arA==: 00:26:10.445 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjJiNDg1ZTI5NTYzOTYyOTU0MDRhODI5YWRlM2E4NzWcjrTQ: 00:26:10.445 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:10.445 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:10.445 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmU1ZjQ3Yjc4MWZjODdmYjEwYzI0YWQxODIxNzZmNmI5ODEzZWYxM2VjMTZkNWMwDu5arA==: 00:26:10.445 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjJiNDg1ZTI5NTYzOTYyOTU0MDRhODI5YWRlM2E4NzWcjrTQ: ]] 00:26:10.445 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjJiNDg1ZTI5NTYzOTYyOTU0MDRhODI5YWRlM2E4NzWcjrTQ: 00:26:10.445 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:26:10.445 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.445 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:10.445 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:10.445 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:10.445 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.445 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:10.445 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.445 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.445 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.445 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.445 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:10.445 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:10.445 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:10.445 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.445 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.445 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:10.445 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.445 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:10.445 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:10.445 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:10.445 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:10.445 14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.445 
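[Editor's note] The repeated host/auth.sh@42-51 entries above show nvmet_auth_set_key taking a digest, DH group, and key index, then echoing 'hmac(shaXXX)', the DH group name, the DHHC-1 secret and, when one exists, the controller secret. A minimal sketch of what such a helper could look like, assuming it writes those values into the kernel nvmet host's configfs attributes (the /sys/kernel/config/nvmet path, the dhchap_* attribute names, and $hostnqn are assumptions, not taken from this log):

    # Hypothetical sketch of the target-side helper traced as host/auth.sh@42-51.
    # Assumptions: the keys/ckeys arrays and $hostnqn are defined by the surrounding
    # script; the configfs attributes below are how Linux nvmet exposes DH-HMAC-CHAP.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[$keyid]} ckey=${ckeys[$keyid]}
        echo "hmac(${digest})" > "/sys/kernel/config/nvmet/hosts/${hostnqn}/dhchap_hash"
        echo "${dhgroup}"      > "/sys/kernel/config/nvmet/hosts/${hostnqn}/dhchap_dhgroup"
        echo "${key}"          > "/sys/kernel/config/nvmet/hosts/${hostnqn}/dhchap_key"
        # The trace only writes a controller (bidirectional) secret when the key index has one.
        [[ -z $ckey ]] || echo "${ckey}" > "/sys/kernel/config/nvmet/hosts/${hostnqn}/dhchap_ctrl_key"
    }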
14:30:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.704 nvme0n1 00:26:10.704 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.704 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.704 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.704 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.704 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.704 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.704 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.704 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.704 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.704 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.704 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.704 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.704 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:26:10.704 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.704 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:10.704 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:10.704 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:10.704 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzAxNGNjMjIzOTRjZmE5MzlhMTBjNjQ1ZGM1NjU2YzE0NDc0Y2YxZjFhYTdmZmVmNGI4NmYxODA5YzI2NjU1OFRqrsw=: 00:26:10.704 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:10.704 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:10.704 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:10.704 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzAxNGNjMjIzOTRjZmE5MzlhMTBjNjQ1ZGM1NjU2YzE0NDc0Y2YxZjFhYTdmZmVmNGI4NmYxODA5YzI2NjU1OFRqrsw=: 00:26:10.704 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:10.704 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:26:10.704 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.704 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:10.704 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:10.704 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:10.704 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.704 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:10.704 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.704 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.704 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.704 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.704 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:10.704 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:10.704 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:10.704 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.704 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.704 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:10.704 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.704 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:10.704 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:10.704 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:10.704 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:10.704 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.704 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.272 nvme0n1 00:26:11.272 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.272 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.272 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.272 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.272 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.272 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.272 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.272 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.272 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.272 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.272 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.272 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:11.273 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.273 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:26:11.273 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.273 14:30:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:11.273 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:11.273 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:11.273 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmViYjU5NDgyNTNlNDBkN2ZjMzQzNDQ4MzRkZDNiYTHr+RU7: 00:26:11.273 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTI0YzRhMjA5YWJhMmVhZTZmY2MxZGEyMWJkMWM0ZDQ1MDM0YjMwY2I5MjM1ZTczMDUyMjc3MDk2NzgwM2M5OTBTI5c=: 00:26:11.273 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:11.273 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:11.273 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmViYjU5NDgyNTNlNDBkN2ZjMzQzNDQ4MzRkZDNiYTHr+RU7: 00:26:11.273 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTI0YzRhMjA5YWJhMmVhZTZmY2MxZGEyMWJkMWM0ZDQ1MDM0YjMwY2I5MjM1ZTczMDUyMjc3MDk2NzgwM2M5OTBTI5c=: ]] 00:26:11.273 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTI0YzRhMjA5YWJhMmVhZTZmY2MxZGEyMWJkMWM0ZDQ1MDM0YjMwY2I5MjM1ZTczMDUyMjc3MDk2NzgwM2M5OTBTI5c=: 00:26:11.273 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:26:11.273 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.273 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:11.273 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:11.273 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:11.273 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.273 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:11.273 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.273 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.273 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.273 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.273 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:11.273 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:11.273 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:11.273 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.273 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.273 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:11.273 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.273 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:11.273 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:11.273 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:11.273 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:11.273 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.273 14:30:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.841 nvme0n1 00:26:11.841 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.841 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.841 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.841 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.841 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.841 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.841 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.841 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.841 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.841 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.841 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.841 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.841 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:26:11.841 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.841 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:11.841 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:11.841 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:11.841 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmFlMmQwODQ4OGM5YWNjYjMzMzI0YjRmN2Y1YjMxNTliM2Q3M2EyYzdiMmI1NDU10+GMbA==: 00:26:11.841 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yzg2YTU4ZThkOTMzY2JkNWI3MmNlZDVhMDQ5YWI2ZTFlMDc2ZjA3YzBiYmVhOTBiT+gSUg==: 00:26:11.841 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:11.841 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:11.841 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmFlMmQwODQ4OGM5YWNjYjMzMzI0YjRmN2Y1YjMxNTliM2Q3M2EyYzdiMmI1NDU10+GMbA==: 00:26:11.841 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yzg2YTU4ZThkOTMzY2JkNWI3MmNlZDVhMDQ5YWI2ZTFlMDc2ZjA3YzBiYmVhOTBiT+gSUg==: ]] 00:26:11.841 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yzg2YTU4ZThkOTMzY2JkNWI3MmNlZDVhMDQ5YWI2ZTFlMDc2ZjA3YzBiYmVhOTBiT+gSUg==: 00:26:11.842 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:26:11.842 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
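[Editor's note] The connect_authenticate calls traced as host/auth.sh@55-65 follow the same pattern for every digest/dhgroup/keyid combination: restrict the initiator to one digest and one DH group, attach with the matching key names, confirm a controller called nvme0 appears, then detach. A sketch of that flow using only the RPCs and flags visible in this log; scripts/rpc.py standing in for rpc_cmd is an assumption, and key1/ckey1 are the keyring names used in the surrounding iteration:

    # Initiator-side flow for the sha256/ffdhe8192/keyid=1 iteration traced above.
    rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # The attach only succeeds if DH-HMAC-CHAP completes; verify the controller and clean up.
    [[ "$(rpc.py bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
    rpc.py bdev_nvme_detach_controller nvme0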
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.842 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:11.842 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:11.842 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:11.842 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.842 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:11.842 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.842 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.842 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.842 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.842 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:11.842 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:11.842 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:11.842 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.842 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.842 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:11.842 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.842 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:11.842 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:11.842 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:11.842 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:11.842 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.842 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.417 nvme0n1 00:26:12.417 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.417 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.417 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.417 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.417 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.417 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.417 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.417 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.417 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:26:12.417 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.417 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.417 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.417 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:26:12.417 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.417 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:12.418 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:12.418 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:12.418 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2I4YjhmZGJmMDc2ZGZiMTZjZDMzMWQ1M2Y3NTYwMmK56u8u: 00:26:12.418 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTdiZmI5YTA3ZDY2YjMxODllYjhlYTMwMjViMThmNjhJ7IzP: 00:26:12.418 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:12.418 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:12.418 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2I4YjhmZGJmMDc2ZGZiMTZjZDMzMWQ1M2Y3NTYwMmK56u8u: 00:26:12.418 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTdiZmI5YTA3ZDY2YjMxODllYjhlYTMwMjViMThmNjhJ7IzP: ]] 00:26:12.418 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTdiZmI5YTA3ZDY2YjMxODllYjhlYTMwMjViMThmNjhJ7IzP: 00:26:12.418 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:26:12.418 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.418 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:12.418 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:12.418 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:12.418 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.418 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:12.418 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.418 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.418 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.418 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.418 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:12.418 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:12.418 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:12.418 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.418 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.418 
14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:12.418 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.418 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:12.418 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:12.418 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:12.418 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:12.418 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.418 14:30:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.988 nvme0n1 00:26:12.988 14:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.988 14:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.988 14:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.988 14:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.988 14:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.988 14:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.988 14:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.988 14:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.988 14:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.988 14:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.988 14:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.988 14:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.988 14:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:26:12.988 14:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.988 14:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:12.988 14:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:12.988 14:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:12.988 14:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmU1ZjQ3Yjc4MWZjODdmYjEwYzI0YWQxODIxNzZmNmI5ODEzZWYxM2VjMTZkNWMwDu5arA==: 00:26:12.988 14:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjJiNDg1ZTI5NTYzOTYyOTU0MDRhODI5YWRlM2E4NzWcjrTQ: 00:26:12.988 14:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:12.988 14:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:12.989 14:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmU1ZjQ3Yjc4MWZjODdmYjEwYzI0YWQxODIxNzZmNmI5ODEzZWYxM2VjMTZkNWMwDu5arA==: 00:26:12.989 14:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
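[Editor's note] Every attach in this log is preceded by the same nvmf/common.sh@769-783 block that resolves which address to dial: an associative array maps each transport to the name of an environment variable, the entry for the active transport is selected, and its value (10.0.0.1 throughout this run) is echoed. A condensed sketch of that helper as it reads from the trace; the name of the transport variable is an assumption:

    # Condensed from the nvmf/common.sh@769-783 entries; $TEST_TRANSPORT is an assumed
    # name for the variable that expands to "tcp" in this run.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1   # indirect expansion: NVMF_INITIATOR_IP -> 10.0.0.1
        echo "${!ip}"
    }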
DHHC-1:00:NjJiNDg1ZTI5NTYzOTYyOTU0MDRhODI5YWRlM2E4NzWcjrTQ: ]] 00:26:12.989 14:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjJiNDg1ZTI5NTYzOTYyOTU0MDRhODI5YWRlM2E4NzWcjrTQ: 00:26:12.989 14:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:26:12.989 14:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.989 14:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:12.989 14:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:12.989 14:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:12.989 14:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.989 14:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:12.989 14:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.989 14:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.989 14:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.989 14:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.989 14:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:12.989 14:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:12.989 14:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:12.989 14:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.989 14:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.989 14:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:12.989 14:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.989 14:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:12.989 14:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:12.989 14:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:12.989 14:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:12.989 14:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.989 14:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.557 nvme0n1 00:26:13.557 14:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.557 14:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.557 14:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.557 14:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.557 14:30:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.557 14:30:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.557 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.557 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.557 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.557 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.557 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.557 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.557 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:26:13.557 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.557 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:13.557 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:13.557 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:13.557 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzAxNGNjMjIzOTRjZmE5MzlhMTBjNjQ1ZGM1NjU2YzE0NDc0Y2YxZjFhYTdmZmVmNGI4NmYxODA5YzI2NjU1OFRqrsw=: 00:26:13.557 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:13.557 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:13.557 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:13.557 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzAxNGNjMjIzOTRjZmE5MzlhMTBjNjQ1ZGM1NjU2YzE0NDc0Y2YxZjFhYTdmZmVmNGI4NmYxODA5YzI2NjU1OFRqrsw=: 00:26:13.557 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:13.557 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:26:13.557 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.557 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:13.557 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:13.557 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:13.557 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.557 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:13.557 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.557 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.557 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.557 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.557 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:13.557 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:13.557 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:13.557 14:30:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.557 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.557 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:13.557 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.557 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:13.557 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:13.557 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:13.557 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:13.557 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.557 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.124 nvme0n1 00:26:14.124 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.124 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.124 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.124 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.124 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.124 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.124 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.124 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.124 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.124 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.124 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.124 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:14.124 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:14.124 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.124 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:26:14.124 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.124 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:14.124 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:14.124 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:14.124 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmViYjU5NDgyNTNlNDBkN2ZjMzQzNDQ4MzRkZDNiYTHr+RU7: 00:26:14.124 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
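[Editor's note] At this point the trace switches from sha256 to sha384, and the host/auth.sh@100-104 entries make the overall shape of the test visible: three nested loops over digests, DH groups, and key indices, with each iteration re-keying the target and then running connect_authenticate against it. Schematically, with array contents limited to what this excerpt actually exercises (the full lists in the script are likely longer):

    # Shape of the test loop suggested by host/auth.sh@100-104; array contents inferred
    # from the combinations seen in this excerpt, not copied from the script.
    digests=(sha256 sha384)
    dhgroups=(ffdhe2048 ffdhe6144 ffdhe8192)
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done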
ckey=DHHC-1:03:ZTI0YzRhMjA5YWJhMmVhZTZmY2MxZGEyMWJkMWM0ZDQ1MDM0YjMwY2I5MjM1ZTczMDUyMjc3MDk2NzgwM2M5OTBTI5c=: 00:26:14.124 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:14.124 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:14.124 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmViYjU5NDgyNTNlNDBkN2ZjMzQzNDQ4MzRkZDNiYTHr+RU7: 00:26:14.124 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTI0YzRhMjA5YWJhMmVhZTZmY2MxZGEyMWJkMWM0ZDQ1MDM0YjMwY2I5MjM1ZTczMDUyMjc3MDk2NzgwM2M5OTBTI5c=: ]] 00:26:14.124 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTI0YzRhMjA5YWJhMmVhZTZmY2MxZGEyMWJkMWM0ZDQ1MDM0YjMwY2I5MjM1ZTczMDUyMjc3MDk2NzgwM2M5OTBTI5c=: 00:26:14.124 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:26:14.124 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.124 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:14.124 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:14.124 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:14.125 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.125 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:14.125 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.125 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.125 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.125 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.125 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:14.125 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:14.125 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:14.125 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.125 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.125 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:14.125 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.125 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:14.125 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:14.125 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:14.125 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:14.125 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.125 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:14.384 nvme0n1 00:26:14.384 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.384 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.384 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.384 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.384 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.384 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.384 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.384 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.384 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.384 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.384 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.384 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.384 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:26:14.384 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.384 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:14.384 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:14.384 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:14.384 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmFlMmQwODQ4OGM5YWNjYjMzMzI0YjRmN2Y1YjMxNTliM2Q3M2EyYzdiMmI1NDU10+GMbA==: 00:26:14.384 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yzg2YTU4ZThkOTMzY2JkNWI3MmNlZDVhMDQ5YWI2ZTFlMDc2ZjA3YzBiYmVhOTBiT+gSUg==: 00:26:14.384 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:14.384 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:14.384 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmFlMmQwODQ4OGM5YWNjYjMzMzI0YjRmN2Y1YjMxNTliM2Q3M2EyYzdiMmI1NDU10+GMbA==: 00:26:14.384 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yzg2YTU4ZThkOTMzY2JkNWI3MmNlZDVhMDQ5YWI2ZTFlMDc2ZjA3YzBiYmVhOTBiT+gSUg==: ]] 00:26:14.384 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yzg2YTU4ZThkOTMzY2JkNWI3MmNlZDVhMDQ5YWI2ZTFlMDc2ZjA3YzBiYmVhOTBiT+gSUg==: 00:26:14.384 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:26:14.385 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.385 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:14.385 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:14.385 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:14.385 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:26:14.385 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:14.385 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.385 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.385 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.385 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.385 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:14.385 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:14.385 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:14.385 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.385 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.385 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:14.385 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.385 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:14.385 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:14.385 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:14.385 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:14.385 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.385 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.385 nvme0n1 00:26:14.385 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.385 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.385 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.385 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.385 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.385 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.385 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.385 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.385 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.385 14:30:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.385 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.385 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.385 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:26:14.385 
14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.385 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:14.385 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:14.385 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:14.385 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2I4YjhmZGJmMDc2ZGZiMTZjZDMzMWQ1M2Y3NTYwMmK56u8u: 00:26:14.385 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTdiZmI5YTA3ZDY2YjMxODllYjhlYTMwMjViMThmNjhJ7IzP: 00:26:14.385 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:14.385 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:14.385 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2I4YjhmZGJmMDc2ZGZiMTZjZDMzMWQ1M2Y3NTYwMmK56u8u: 00:26:14.385 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTdiZmI5YTA3ZDY2YjMxODllYjhlYTMwMjViMThmNjhJ7IzP: ]] 00:26:14.385 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTdiZmI5YTA3ZDY2YjMxODllYjhlYTMwMjViMThmNjhJ7IzP: 00:26:14.385 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:26:14.385 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.385 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:14.385 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:14.385 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:14.385 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.385 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:14.385 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.385 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.644 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.644 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.644 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:14.644 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:14.644 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:14.644 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.644 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.644 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:14.644 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.644 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:14.644 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:14.644 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:14.644 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:14.644 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.644 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.644 nvme0n1 00:26:14.644 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.644 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.644 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.644 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.644 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.644 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.644 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.644 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.644 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.644 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.644 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.644 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.644 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:26:14.644 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.644 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:14.644 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:14.644 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:14.644 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmU1ZjQ3Yjc4MWZjODdmYjEwYzI0YWQxODIxNzZmNmI5ODEzZWYxM2VjMTZkNWMwDu5arA==: 00:26:14.644 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjJiNDg1ZTI5NTYzOTYyOTU0MDRhODI5YWRlM2E4NzWcjrTQ: 00:26:14.644 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:14.644 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:14.645 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmU1ZjQ3Yjc4MWZjODdmYjEwYzI0YWQxODIxNzZmNmI5ODEzZWYxM2VjMTZkNWMwDu5arA==: 00:26:14.645 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjJiNDg1ZTI5NTYzOTYyOTU0MDRhODI5YWRlM2E4NzWcjrTQ: ]] 00:26:14.645 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjJiNDg1ZTI5NTYzOTYyOTU0MDRhODI5YWRlM2E4NzWcjrTQ: 00:26:14.645 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:26:14.645 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.645 
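[Editor's note] The --dhchap-key/--dhchap-ctrlr-key arguments in every attach are key names (key0-key4, ckey0-ckey3), not the DHHC-1 secrets themselves: the secrets shown in the echoes are consumed by the target, while the initiator resolves the names through SPDK's keyring, which is set up earlier in the test and not visible in this excerpt. A hedged sketch of how such a name is typically registered; the keyring_file_add_key RPC, the file path, and the exact rpc.py syntax are assumptions here:

    # Assumed setup, outside this excerpt: the DHHC-1 secret is stored in a file and
    # registered under the name later passed as --dhchap-key key2.
    echo -n "DHHC-1:01:N2I4YjhmZGJmMDc2ZGZiMTZjZDMzMWQ1M2Y3NTYwMmK56u8u:" > /tmp/key2
    rpc.py keyring_file_add_key key2 /tmp/key2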
14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:14.645 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:14.645 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:14.645 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.645 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:14.645 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.645 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.645 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.645 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.645 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:14.645 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:14.645 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:14.645 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.645 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.645 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:14.645 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.645 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:14.645 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:14.645 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:14.645 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:14.645 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.645 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.903 nvme0n1 00:26:14.903 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.903 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.903 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.903 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.903 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.903 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.903 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.903 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.903 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.903 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:14.903 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.903 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.903 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:26:14.903 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.903 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:14.903 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:14.903 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:14.903 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzAxNGNjMjIzOTRjZmE5MzlhMTBjNjQ1ZGM1NjU2YzE0NDc0Y2YxZjFhYTdmZmVmNGI4NmYxODA5YzI2NjU1OFRqrsw=: 00:26:14.903 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:14.903 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:14.903 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:14.903 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzAxNGNjMjIzOTRjZmE5MzlhMTBjNjQ1ZGM1NjU2YzE0NDc0Y2YxZjFhYTdmZmVmNGI4NmYxODA5YzI2NjU1OFRqrsw=: 00:26:14.903 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:14.903 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:26:14.903 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.903 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:14.903 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:14.903 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:14.903 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.903 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:14.903 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.903 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.903 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.903 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.903 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:14.903 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:14.903 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:14.904 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.904 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.904 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:14.904 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.904 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:14.904 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:14.904 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:14.904 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:14.904 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.904 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.904 nvme0n1 00:26:14.904 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.904 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.904 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.904 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.904 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.904 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmViYjU5NDgyNTNlNDBkN2ZjMzQzNDQ4MzRkZDNiYTHr+RU7: 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTI0YzRhMjA5YWJhMmVhZTZmY2MxZGEyMWJkMWM0ZDQ1MDM0YjMwY2I5MjM1ZTczMDUyMjc3MDk2NzgwM2M5OTBTI5c=: 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmViYjU5NDgyNTNlNDBkN2ZjMzQzNDQ4MzRkZDNiYTHr+RU7: 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTI0YzRhMjA5YWJhMmVhZTZmY2MxZGEyMWJkMWM0ZDQ1MDM0YjMwY2I5MjM1ZTczMDUyMjc3MDk2NzgwM2M5OTBTI5c=: ]] 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZTI0YzRhMjA5YWJhMmVhZTZmY2MxZGEyMWJkMWM0ZDQ1MDM0YjMwY2I5MjM1ZTczMDUyMjc3MDk2NzgwM2M5OTBTI5c=: 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.161 nvme0n1 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.161 
14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmFlMmQwODQ4OGM5YWNjYjMzMzI0YjRmN2Y1YjMxNTliM2Q3M2EyYzdiMmI1NDU10+GMbA==: 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yzg2YTU4ZThkOTMzY2JkNWI3MmNlZDVhMDQ5YWI2ZTFlMDc2ZjA3YzBiYmVhOTBiT+gSUg==: 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmFlMmQwODQ4OGM5YWNjYjMzMzI0YjRmN2Y1YjMxNTliM2Q3M2EyYzdiMmI1NDU10+GMbA==: 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yzg2YTU4ZThkOTMzY2JkNWI3MmNlZDVhMDQ5YWI2ZTFlMDc2ZjA3YzBiYmVhOTBiT+gSUg==: ]] 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yzg2YTU4ZThkOTMzY2JkNWI3MmNlZDVhMDQ5YWI2ZTFlMDc2ZjA3YzBiYmVhOTBiT+gSUg==: 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:15.161 14:30:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.161 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.420 nvme0n1 00:26:15.420 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.420 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.420 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.420 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.420 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.420 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.420 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.420 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.420 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.420 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.420 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.420 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.420 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:26:15.420 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.420 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:15.420 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:15.420 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:15.420 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2I4YjhmZGJmMDc2ZGZiMTZjZDMzMWQ1M2Y3NTYwMmK56u8u: 00:26:15.420 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTdiZmI5YTA3ZDY2YjMxODllYjhlYTMwMjViMThmNjhJ7IzP: 00:26:15.420 14:30:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:15.420 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:15.420 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2I4YjhmZGJmMDc2ZGZiMTZjZDMzMWQ1M2Y3NTYwMmK56u8u: 00:26:15.420 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTdiZmI5YTA3ZDY2YjMxODllYjhlYTMwMjViMThmNjhJ7IzP: ]] 00:26:15.420 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTdiZmI5YTA3ZDY2YjMxODllYjhlYTMwMjViMThmNjhJ7IzP: 00:26:15.420 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:26:15.420 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.420 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:15.420 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:15.420 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:15.420 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.420 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:15.420 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.420 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.420 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.420 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.420 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:15.420 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:15.420 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:15.420 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.420 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.420 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:15.420 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.420 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:15.420 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:15.420 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:15.420 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:15.420 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.420 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.679 nvme0n1 00:26:15.679 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.679 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.679 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.679 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.679 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.679 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.679 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.679 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.679 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.679 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.679 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.679 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.679 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:26:15.679 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.679 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:15.679 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:15.679 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:15.679 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmU1ZjQ3Yjc4MWZjODdmYjEwYzI0YWQxODIxNzZmNmI5ODEzZWYxM2VjMTZkNWMwDu5arA==: 00:26:15.679 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjJiNDg1ZTI5NTYzOTYyOTU0MDRhODI5YWRlM2E4NzWcjrTQ: 00:26:15.679 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:15.679 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:15.679 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmU1ZjQ3Yjc4MWZjODdmYjEwYzI0YWQxODIxNzZmNmI5ODEzZWYxM2VjMTZkNWMwDu5arA==: 00:26:15.679 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjJiNDg1ZTI5NTYzOTYyOTU0MDRhODI5YWRlM2E4NzWcjrTQ: ]] 00:26:15.679 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjJiNDg1ZTI5NTYzOTYyOTU0MDRhODI5YWRlM2E4NzWcjrTQ: 00:26:15.679 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:26:15.679 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.679 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:15.679 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:15.679 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:15.679 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.679 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:15.679 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.679 14:30:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.679 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.679 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.679 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:15.679 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:15.679 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:15.679 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.679 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.679 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:15.679 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.679 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:15.679 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:15.679 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:15.679 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:15.679 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.679 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.679 nvme0n1 00:26:15.679 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.679 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.679 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.679 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.679 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:15.940 
14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzAxNGNjMjIzOTRjZmE5MzlhMTBjNjQ1ZGM1NjU2YzE0NDc0Y2YxZjFhYTdmZmVmNGI4NmYxODA5YzI2NjU1OFRqrsw=: 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzAxNGNjMjIzOTRjZmE5MzlhMTBjNjQ1ZGM1NjU2YzE0NDc0Y2YxZjFhYTdmZmVmNGI4NmYxODA5YzI2NjU1OFRqrsw=: 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
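(Editor's sketch, not part of the captured output.) The xtrace records above are the nvmf_auth_host test cycling through every digest/DH-group/keyid combination; each iteration boils down to the short RPC sequence below. This is a hedged reconstruction using only the commands visible in this log: rpc_cmd is the test helper seen above, the address, NQNs and key names are the ones from this run, and DHHC-1 secrets are elided rather than repeated. It is meant as a reading aid for the log, not as the canonical SPDK procedure.

    # restrict the initiator to one digest and one DH group for this iteration
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096

    # connect with the host key for this keyid (and the controller key, when one is defined)
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # verify the authenticated controller came up, then tear it down before the next combination
    rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # the test expects "nvme0"
    rpc_cmd bdev_nvme_detach_controller nvme0

In the log this repeats for keyids 0 through 4 under each DH group (ffdhe2048, ffdhe3072, ffdhe4096, ffdhe6144, ...); keyid 4 has no controller key, which is why its attach line carries only --dhchap-key key4.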
00:26:15.940 nvme0n1 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmViYjU5NDgyNTNlNDBkN2ZjMzQzNDQ4MzRkZDNiYTHr+RU7: 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTI0YzRhMjA5YWJhMmVhZTZmY2MxZGEyMWJkMWM0ZDQ1MDM0YjMwY2I5MjM1ZTczMDUyMjc3MDk2NzgwM2M5OTBTI5c=: 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmViYjU5NDgyNTNlNDBkN2ZjMzQzNDQ4MzRkZDNiYTHr+RU7: 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTI0YzRhMjA5YWJhMmVhZTZmY2MxZGEyMWJkMWM0ZDQ1MDM0YjMwY2I5MjM1ZTczMDUyMjc3MDk2NzgwM2M5OTBTI5c=: ]] 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTI0YzRhMjA5YWJhMmVhZTZmY2MxZGEyMWJkMWM0ZDQ1MDM0YjMwY2I5MjM1ZTczMDUyMjc3MDk2NzgwM2M5OTBTI5c=: 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:15.940 14:30:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.940 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.200 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.200 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.200 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:16.200 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:16.200 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:16.200 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.200 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.200 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:16.200 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.200 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:16.200 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:16.200 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:16.200 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:16.200 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.200 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.200 nvme0n1 00:26:16.200 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.200 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.200 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.200 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.200 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.200 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.200 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.200 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.200 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.200 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.200 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.200 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.200 14:30:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:26:16.200 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.200 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:16.200 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:16.200 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:16.200 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmFlMmQwODQ4OGM5YWNjYjMzMzI0YjRmN2Y1YjMxNTliM2Q3M2EyYzdiMmI1NDU10+GMbA==: 00:26:16.200 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yzg2YTU4ZThkOTMzY2JkNWI3MmNlZDVhMDQ5YWI2ZTFlMDc2ZjA3YzBiYmVhOTBiT+gSUg==: 00:26:16.200 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:16.200 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:16.200 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmFlMmQwODQ4OGM5YWNjYjMzMzI0YjRmN2Y1YjMxNTliM2Q3M2EyYzdiMmI1NDU10+GMbA==: 00:26:16.200 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yzg2YTU4ZThkOTMzY2JkNWI3MmNlZDVhMDQ5YWI2ZTFlMDc2ZjA3YzBiYmVhOTBiT+gSUg==: ]] 00:26:16.200 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yzg2YTU4ZThkOTMzY2JkNWI3MmNlZDVhMDQ5YWI2ZTFlMDc2ZjA3YzBiYmVhOTBiT+gSUg==: 00:26:16.200 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:26:16.200 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.200 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:16.200 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:16.200 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:16.200 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.200 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:16.200 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.200 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.200 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.460 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.460 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:16.460 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:16.460 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:16.460 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.460 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.460 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:16.460 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.460 14:30:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:16.460 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:16.460 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:16.460 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:16.460 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.460 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.460 nvme0n1 00:26:16.460 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.460 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.460 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.460 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.460 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.460 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.460 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.460 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.460 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.460 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.460 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.460 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.460 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:26:16.460 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.460 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:16.460 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:16.460 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:16.460 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2I4YjhmZGJmMDc2ZGZiMTZjZDMzMWQ1M2Y3NTYwMmK56u8u: 00:26:16.460 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTdiZmI5YTA3ZDY2YjMxODllYjhlYTMwMjViMThmNjhJ7IzP: 00:26:16.460 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:16.460 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:16.460 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2I4YjhmZGJmMDc2ZGZiMTZjZDMzMWQ1M2Y3NTYwMmK56u8u: 00:26:16.460 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTdiZmI5YTA3ZDY2YjMxODllYjhlYTMwMjViMThmNjhJ7IzP: ]] 00:26:16.460 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTdiZmI5YTA3ZDY2YjMxODllYjhlYTMwMjViMThmNjhJ7IzP: 00:26:16.460 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:26:16.460 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.460 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:16.460 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:16.460 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:16.460 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.460 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:16.460 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.460 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.460 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.460 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.460 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:16.460 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:16.460 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:16.460 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.460 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.460 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:16.460 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.460 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:16.460 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:16.460 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:16.460 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:16.460 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.460 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.720 nvme0n1 00:26:16.720 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.720 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.720 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.720 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.720 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.720 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.720 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.720 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:16.720 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.720 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.720 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.720 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.720 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:26:16.720 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.720 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:16.720 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:16.720 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:16.720 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmU1ZjQ3Yjc4MWZjODdmYjEwYzI0YWQxODIxNzZmNmI5ODEzZWYxM2VjMTZkNWMwDu5arA==: 00:26:16.721 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjJiNDg1ZTI5NTYzOTYyOTU0MDRhODI5YWRlM2E4NzWcjrTQ: 00:26:16.721 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:16.721 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:16.721 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmU1ZjQ3Yjc4MWZjODdmYjEwYzI0YWQxODIxNzZmNmI5ODEzZWYxM2VjMTZkNWMwDu5arA==: 00:26:16.721 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjJiNDg1ZTI5NTYzOTYyOTU0MDRhODI5YWRlM2E4NzWcjrTQ: ]] 00:26:16.721 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjJiNDg1ZTI5NTYzOTYyOTU0MDRhODI5YWRlM2E4NzWcjrTQ: 00:26:16.721 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:26:16.721 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.721 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:16.721 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:16.721 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:16.721 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.721 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:16.721 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.721 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.721 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.721 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.721 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:16.721 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:16.721 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:16.721 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.721 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.721 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:16.721 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.721 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:16.721 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:16.721 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:16.721 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:16.721 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.721 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.982 nvme0n1 00:26:16.982 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.982 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.982 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.982 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.982 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.982 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.982 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.982 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.982 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.982 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.982 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.982 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.982 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:26:16.982 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.982 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:16.982 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:16.982 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:16.982 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzAxNGNjMjIzOTRjZmE5MzlhMTBjNjQ1ZGM1NjU2YzE0NDc0Y2YxZjFhYTdmZmVmNGI4NmYxODA5YzI2NjU1OFRqrsw=: 00:26:16.982 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:16.982 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:16.982 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:16.982 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MzAxNGNjMjIzOTRjZmE5MzlhMTBjNjQ1ZGM1NjU2YzE0NDc0Y2YxZjFhYTdmZmVmNGI4NmYxODA5YzI2NjU1OFRqrsw=: 00:26:16.982 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:16.982 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:26:16.982 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.982 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:16.982 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:16.982 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:16.982 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.982 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:16.982 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.982 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.982 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.982 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.982 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:16.982 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:16.982 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:16.982 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.982 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.982 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:16.982 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.982 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:16.982 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:16.982 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:16.982 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:16.982 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.982 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.278 nvme0n1 00:26:17.278 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.278 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.278 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.278 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.278 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.278 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.278 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.278 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.278 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.278 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.278 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.278 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:17.278 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.278 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:26:17.278 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.278 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:17.278 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:17.278 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:17.278 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmViYjU5NDgyNTNlNDBkN2ZjMzQzNDQ4MzRkZDNiYTHr+RU7: 00:26:17.278 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTI0YzRhMjA5YWJhMmVhZTZmY2MxZGEyMWJkMWM0ZDQ1MDM0YjMwY2I5MjM1ZTczMDUyMjc3MDk2NzgwM2M5OTBTI5c=: 00:26:17.278 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:17.278 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:17.278 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmViYjU5NDgyNTNlNDBkN2ZjMzQzNDQ4MzRkZDNiYTHr+RU7: 00:26:17.278 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTI0YzRhMjA5YWJhMmVhZTZmY2MxZGEyMWJkMWM0ZDQ1MDM0YjMwY2I5MjM1ZTczMDUyMjc3MDk2NzgwM2M5OTBTI5c=: ]] 00:26:17.278 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTI0YzRhMjA5YWJhMmVhZTZmY2MxZGEyMWJkMWM0ZDQ1MDM0YjMwY2I5MjM1ZTczMDUyMjc3MDk2NzgwM2M5OTBTI5c=: 00:26:17.279 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:26:17.279 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.279 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:17.279 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:17.279 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:17.279 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.279 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:17.279 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.279 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.279 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.279 14:30:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.279 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:17.279 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:17.279 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:17.279 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.279 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.279 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:17.279 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.279 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:17.279 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:17.279 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:17.279 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:17.279 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.279 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.538 nvme0n1 00:26:17.538 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.538 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.538 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.538 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.538 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.797 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.797 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.797 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.797 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.797 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.797 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.797 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.797 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:26:17.797 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.797 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:17.797 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:17.797 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:17.797 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MmFlMmQwODQ4OGM5YWNjYjMzMzI0YjRmN2Y1YjMxNTliM2Q3M2EyYzdiMmI1NDU10+GMbA==: 00:26:17.797 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yzg2YTU4ZThkOTMzY2JkNWI3MmNlZDVhMDQ5YWI2ZTFlMDc2ZjA3YzBiYmVhOTBiT+gSUg==: 00:26:17.797 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:17.797 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:17.797 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmFlMmQwODQ4OGM5YWNjYjMzMzI0YjRmN2Y1YjMxNTliM2Q3M2EyYzdiMmI1NDU10+GMbA==: 00:26:17.797 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yzg2YTU4ZThkOTMzY2JkNWI3MmNlZDVhMDQ5YWI2ZTFlMDc2ZjA3YzBiYmVhOTBiT+gSUg==: ]] 00:26:17.797 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yzg2YTU4ZThkOTMzY2JkNWI3MmNlZDVhMDQ5YWI2ZTFlMDc2ZjA3YzBiYmVhOTBiT+gSUg==: 00:26:17.797 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:26:17.797 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.797 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:17.797 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:17.797 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:17.797 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.797 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:17.797 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.797 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.797 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.797 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.797 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:17.797 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:17.797 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:17.797 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.797 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.797 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:17.797 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.797 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:17.797 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:17.797 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:17.797 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:17.797 14:30:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.797 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.066 nvme0n1 00:26:18.066 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.066 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.066 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.066 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.066 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.066 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.066 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.066 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.066 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.066 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.066 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.066 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.066 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:26:18.066 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.066 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:18.066 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:18.066 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:18.066 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2I4YjhmZGJmMDc2ZGZiMTZjZDMzMWQ1M2Y3NTYwMmK56u8u: 00:26:18.066 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTdiZmI5YTA3ZDY2YjMxODllYjhlYTMwMjViMThmNjhJ7IzP: 00:26:18.066 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:18.066 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:18.066 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2I4YjhmZGJmMDc2ZGZiMTZjZDMzMWQ1M2Y3NTYwMmK56u8u: 00:26:18.066 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTdiZmI5YTA3ZDY2YjMxODllYjhlYTMwMjViMThmNjhJ7IzP: ]] 00:26:18.066 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTdiZmI5YTA3ZDY2YjMxODllYjhlYTMwMjViMThmNjhJ7IzP: 00:26:18.066 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:26:18.066 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.066 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:18.066 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:18.066 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:18.066 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.066 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:18.066 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.066 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.066 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.066 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.066 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:18.066 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:18.066 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:18.066 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.066 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.066 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:18.066 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.066 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:18.066 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:18.066 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:18.066 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:18.066 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.066 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.325 nvme0n1 00:26:18.325 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.325 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.325 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.325 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.325 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.325 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.584 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.584 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.584 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.584 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.584 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.584 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.584 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:26:18.584 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.584 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:18.584 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:18.584 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:18.584 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmU1ZjQ3Yjc4MWZjODdmYjEwYzI0YWQxODIxNzZmNmI5ODEzZWYxM2VjMTZkNWMwDu5arA==: 00:26:18.584 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjJiNDg1ZTI5NTYzOTYyOTU0MDRhODI5YWRlM2E4NzWcjrTQ: 00:26:18.584 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:18.584 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:18.584 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmU1ZjQ3Yjc4MWZjODdmYjEwYzI0YWQxODIxNzZmNmI5ODEzZWYxM2VjMTZkNWMwDu5arA==: 00:26:18.584 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjJiNDg1ZTI5NTYzOTYyOTU0MDRhODI5YWRlM2E4NzWcjrTQ: ]] 00:26:18.584 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjJiNDg1ZTI5NTYzOTYyOTU0MDRhODI5YWRlM2E4NzWcjrTQ: 00:26:18.584 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:26:18.584 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.584 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:18.584 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:18.584 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:18.584 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.584 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:18.584 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.584 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.584 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.584 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.584 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:18.585 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:18.585 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:18.585 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.585 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.585 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:18.585 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.585 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:18.585 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:18.585 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:18.585 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:18.585 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.585 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.844 nvme0n1 00:26:18.844 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.844 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.844 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.844 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.844 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.844 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.844 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.844 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.844 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.844 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.844 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.844 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.844 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:26:18.844 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.844 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:18.844 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:18.844 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:18.844 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzAxNGNjMjIzOTRjZmE5MzlhMTBjNjQ1ZGM1NjU2YzE0NDc0Y2YxZjFhYTdmZmVmNGI4NmYxODA5YzI2NjU1OFRqrsw=: 00:26:18.844 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:18.844 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:18.844 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:18.844 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzAxNGNjMjIzOTRjZmE5MzlhMTBjNjQ1ZGM1NjU2YzE0NDc0Y2YxZjFhYTdmZmVmNGI4NmYxODA5YzI2NjU1OFRqrsw=: 00:26:18.844 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:18.844 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:26:18.844 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.844 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:18.844 14:30:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:18.844 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:18.844 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.844 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:18.844 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.844 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.845 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.845 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.845 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:18.845 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:18.845 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:18.845 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.845 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.845 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:18.845 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.845 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:18.845 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:18.845 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:18.845 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:18.845 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.845 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.104 nvme0n1 00:26:19.104 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.104 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.104 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.104 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.104 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.104 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.104 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.104 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.104 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.104 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.363 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.363 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:19.363 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.363 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:26:19.363 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.363 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:19.363 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:19.363 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:19.363 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmViYjU5NDgyNTNlNDBkN2ZjMzQzNDQ4MzRkZDNiYTHr+RU7: 00:26:19.363 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTI0YzRhMjA5YWJhMmVhZTZmY2MxZGEyMWJkMWM0ZDQ1MDM0YjMwY2I5MjM1ZTczMDUyMjc3MDk2NzgwM2M5OTBTI5c=: 00:26:19.363 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:19.363 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:19.363 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmViYjU5NDgyNTNlNDBkN2ZjMzQzNDQ4MzRkZDNiYTHr+RU7: 00:26:19.363 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTI0YzRhMjA5YWJhMmVhZTZmY2MxZGEyMWJkMWM0ZDQ1MDM0YjMwY2I5MjM1ZTczMDUyMjc3MDk2NzgwM2M5OTBTI5c=: ]] 00:26:19.363 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTI0YzRhMjA5YWJhMmVhZTZmY2MxZGEyMWJkMWM0ZDQ1MDM0YjMwY2I5MjM1ZTczMDUyMjc3MDk2NzgwM2M5OTBTI5c=: 00:26:19.363 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:26:19.363 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.363 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:19.363 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:19.363 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:19.363 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.364 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:19.364 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.364 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.364 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.364 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.364 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:19.364 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:19.364 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:19.364 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.364 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.364 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:19.364 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.364 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:19.364 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:19.364 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:19.364 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:19.364 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.364 14:30:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.931 nvme0n1 00:26:19.931 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.931 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.931 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.931 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.932 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.932 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.932 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.932 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.932 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.932 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.932 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.932 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.932 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:26:19.932 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.932 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:19.932 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:19.932 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:19.932 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmFlMmQwODQ4OGM5YWNjYjMzMzI0YjRmN2Y1YjMxNTliM2Q3M2EyYzdiMmI1NDU10+GMbA==: 00:26:19.932 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yzg2YTU4ZThkOTMzY2JkNWI3MmNlZDVhMDQ5YWI2ZTFlMDc2ZjA3YzBiYmVhOTBiT+gSUg==: 00:26:19.932 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:19.932 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:19.932 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MmFlMmQwODQ4OGM5YWNjYjMzMzI0YjRmN2Y1YjMxNTliM2Q3M2EyYzdiMmI1NDU10+GMbA==: 00:26:19.932 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yzg2YTU4ZThkOTMzY2JkNWI3MmNlZDVhMDQ5YWI2ZTFlMDc2ZjA3YzBiYmVhOTBiT+gSUg==: ]] 00:26:19.932 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yzg2YTU4ZThkOTMzY2JkNWI3MmNlZDVhMDQ5YWI2ZTFlMDc2ZjA3YzBiYmVhOTBiT+gSUg==: 00:26:19.932 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:26:19.932 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.932 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:19.932 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:19.932 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:19.932 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.932 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:19.932 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.932 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.932 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.932 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.932 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:19.932 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:19.932 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:19.932 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.932 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.932 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:19.932 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.932 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:19.932 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:19.932 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:19.932 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:19.932 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.932 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.500 nvme0n1 00:26:20.500 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.500 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.500 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.500 14:30:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.500 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.500 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.500 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.500 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.500 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.500 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.500 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.500 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.500 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:26:20.500 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.500 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:20.500 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:20.500 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:20.500 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2I4YjhmZGJmMDc2ZGZiMTZjZDMzMWQ1M2Y3NTYwMmK56u8u: 00:26:20.500 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTdiZmI5YTA3ZDY2YjMxODllYjhlYTMwMjViMThmNjhJ7IzP: 00:26:20.500 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:20.500 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:20.500 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2I4YjhmZGJmMDc2ZGZiMTZjZDMzMWQ1M2Y3NTYwMmK56u8u: 00:26:20.500 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTdiZmI5YTA3ZDY2YjMxODllYjhlYTMwMjViMThmNjhJ7IzP: ]] 00:26:20.500 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTdiZmI5YTA3ZDY2YjMxODllYjhlYTMwMjViMThmNjhJ7IzP: 00:26:20.500 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:26:20.500 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.500 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:20.500 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:20.500 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:20.500 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.500 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:20.500 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.500 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.500 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.500 14:30:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.500 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:20.500 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:20.500 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:20.500 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.500 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.500 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:20.500 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.501 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:20.501 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:20.501 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:20.501 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:20.501 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.501 14:30:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.070 nvme0n1 00:26:21.070 14:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.070 14:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.070 14:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:21.070 14:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.070 14:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.070 14:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.070 14:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.070 14:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.070 14:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.070 14:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.070 14:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.070 14:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:21.070 14:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:26:21.070 14:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:21.070 14:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:21.070 14:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:21.070 14:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:21.070 14:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YmU1ZjQ3Yjc4MWZjODdmYjEwYzI0YWQxODIxNzZmNmI5ODEzZWYxM2VjMTZkNWMwDu5arA==: 00:26:21.070 14:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjJiNDg1ZTI5NTYzOTYyOTU0MDRhODI5YWRlM2E4NzWcjrTQ: 00:26:21.070 14:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:21.070 14:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:21.070 14:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmU1ZjQ3Yjc4MWZjODdmYjEwYzI0YWQxODIxNzZmNmI5ODEzZWYxM2VjMTZkNWMwDu5arA==: 00:26:21.070 14:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjJiNDg1ZTI5NTYzOTYyOTU0MDRhODI5YWRlM2E4NzWcjrTQ: ]] 00:26:21.070 14:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjJiNDg1ZTI5NTYzOTYyOTU0MDRhODI5YWRlM2E4NzWcjrTQ: 00:26:21.070 14:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:26:21.070 14:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:21.070 14:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:21.070 14:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:21.070 14:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:21.070 14:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:21.070 14:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:21.070 14:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.070 14:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.070 14:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.070 14:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:21.070 14:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:21.070 14:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:21.070 14:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:21.070 14:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.070 14:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.070 14:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:21.070 14:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:21.070 14:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:21.070 14:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:21.070 14:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:21.070 14:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:21.070 14:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.070 
14:30:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.640 nvme0n1 00:26:21.640 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.640 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.640 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.640 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:21.640 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.640 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.640 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.640 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.640 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.640 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.640 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.640 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:21.640 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:26:21.640 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:21.640 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:21.640 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:21.640 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:21.640 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzAxNGNjMjIzOTRjZmE5MzlhMTBjNjQ1ZGM1NjU2YzE0NDc0Y2YxZjFhYTdmZmVmNGI4NmYxODA5YzI2NjU1OFRqrsw=: 00:26:21.640 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:21.640 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:21.640 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:21.640 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzAxNGNjMjIzOTRjZmE5MzlhMTBjNjQ1ZGM1NjU2YzE0NDc0Y2YxZjFhYTdmZmVmNGI4NmYxODA5YzI2NjU1OFRqrsw=: 00:26:21.640 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:21.640 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:26:21.640 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:21.640 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:21.640 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:21.640 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:21.640 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:21.640 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:21.640 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.640 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.640 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.640 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:21.640 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:21.640 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:21.640 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:21.640 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.640 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.640 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:21.640 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:21.640 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:21.640 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:21.640 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:21.640 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:21.640 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.640 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.208 nvme0n1 00:26:22.208 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.208 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.208 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.208 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.208 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.208 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.208 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.208 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.208 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.208 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.208 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.208 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:22.208 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:22.208 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.208 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:26:22.208 14:30:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.208 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:22.208 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:22.208 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:22.208 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmViYjU5NDgyNTNlNDBkN2ZjMzQzNDQ4MzRkZDNiYTHr+RU7: 00:26:22.208 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTI0YzRhMjA5YWJhMmVhZTZmY2MxZGEyMWJkMWM0ZDQ1MDM0YjMwY2I5MjM1ZTczMDUyMjc3MDk2NzgwM2M5OTBTI5c=: 00:26:22.208 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:22.208 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:22.208 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmViYjU5NDgyNTNlNDBkN2ZjMzQzNDQ4MzRkZDNiYTHr+RU7: 00:26:22.208 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTI0YzRhMjA5YWJhMmVhZTZmY2MxZGEyMWJkMWM0ZDQ1MDM0YjMwY2I5MjM1ZTczMDUyMjc3MDk2NzgwM2M5OTBTI5c=: ]] 00:26:22.208 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTI0YzRhMjA5YWJhMmVhZTZmY2MxZGEyMWJkMWM0ZDQ1MDM0YjMwY2I5MjM1ZTczMDUyMjc3MDk2NzgwM2M5OTBTI5c=: 00:26:22.208 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:26:22.208 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.208 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:22.208 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:22.208 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:22.208 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.208 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:22.208 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.208 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.208 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.208 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.208 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:22.208 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:22.208 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:22.208 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.208 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.208 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:22.208 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:22.208 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:22.208 14:30:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:22.208 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:22.208 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:22.208 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.208 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.208 nvme0n1 00:26:22.208 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.208 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.208 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.208 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.208 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.468 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.468 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.468 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.468 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.468 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.468 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.468 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.468 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:26:22.468 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.468 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:22.468 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:22.468 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:22.468 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmFlMmQwODQ4OGM5YWNjYjMzMzI0YjRmN2Y1YjMxNTliM2Q3M2EyYzdiMmI1NDU10+GMbA==: 00:26:22.468 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yzg2YTU4ZThkOTMzY2JkNWI3MmNlZDVhMDQ5YWI2ZTFlMDc2ZjA3YzBiYmVhOTBiT+gSUg==: 00:26:22.468 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:22.468 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:22.468 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmFlMmQwODQ4OGM5YWNjYjMzMzI0YjRmN2Y1YjMxNTliM2Q3M2EyYzdiMmI1NDU10+GMbA==: 00:26:22.468 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yzg2YTU4ZThkOTMzY2JkNWI3MmNlZDVhMDQ5YWI2ZTFlMDc2ZjA3YzBiYmVhOTBiT+gSUg==: ]] 00:26:22.468 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yzg2YTU4ZThkOTMzY2JkNWI3MmNlZDVhMDQ5YWI2ZTFlMDc2ZjA3YzBiYmVhOTBiT+gSUg==: 00:26:22.468 14:30:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:26:22.468 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.468 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:22.468 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:22.468 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:22.468 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.468 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:22.468 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.468 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.468 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.468 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.468 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:22.468 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:22.468 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:22.468 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.468 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.468 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:22.468 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:22.468 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:22.468 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:22.468 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:22.468 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:22.469 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.469 14:30:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.469 nvme0n1 00:26:22.469 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.469 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.469 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.469 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.469 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.469 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.469 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.469 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.469 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.469 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.469 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.469 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.469 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:26:22.469 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.469 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:22.469 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:22.469 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:22.469 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2I4YjhmZGJmMDc2ZGZiMTZjZDMzMWQ1M2Y3NTYwMmK56u8u: 00:26:22.469 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTdiZmI5YTA3ZDY2YjMxODllYjhlYTMwMjViMThmNjhJ7IzP: 00:26:22.469 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:22.469 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:22.469 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2I4YjhmZGJmMDc2ZGZiMTZjZDMzMWQ1M2Y3NTYwMmK56u8u: 00:26:22.469 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTdiZmI5YTA3ZDY2YjMxODllYjhlYTMwMjViMThmNjhJ7IzP: ]] 00:26:22.469 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTdiZmI5YTA3ZDY2YjMxODllYjhlYTMwMjViMThmNjhJ7IzP: 00:26:22.469 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:26:22.469 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.469 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:22.469 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:22.469 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:22.469 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.469 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:22.469 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.469 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.469 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.469 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.469 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:22.469 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:22.469 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:22.469 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.469 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.469 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:22.469 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:22.469 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:22.469 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:22.469 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:22.469 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:22.469 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.469 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.728 nvme0n1 00:26:22.728 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.728 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.728 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.728 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.728 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.728 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.728 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.728 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.728 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.728 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.728 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.728 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.728 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:26:22.728 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.728 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:22.728 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:22.728 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:22.728 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmU1ZjQ3Yjc4MWZjODdmYjEwYzI0YWQxODIxNzZmNmI5ODEzZWYxM2VjMTZkNWMwDu5arA==: 00:26:22.728 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjJiNDg1ZTI5NTYzOTYyOTU0MDRhODI5YWRlM2E4NzWcjrTQ: 00:26:22.728 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:22.728 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:22.728 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:YmU1ZjQ3Yjc4MWZjODdmYjEwYzI0YWQxODIxNzZmNmI5ODEzZWYxM2VjMTZkNWMwDu5arA==: 00:26:22.728 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjJiNDg1ZTI5NTYzOTYyOTU0MDRhODI5YWRlM2E4NzWcjrTQ: ]] 00:26:22.728 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjJiNDg1ZTI5NTYzOTYyOTU0MDRhODI5YWRlM2E4NzWcjrTQ: 00:26:22.728 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:26:22.728 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.728 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:22.728 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:22.728 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:22.728 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.728 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:22.728 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.728 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.728 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.728 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.728 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:22.728 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:22.728 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:22.728 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.728 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.728 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:22.728 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:22.728 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:22.728 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:22.728 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:22.728 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:22.728 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.728 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.988 nvme0n1 00:26:22.988 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.988 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.988 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.988 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.988 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.988 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.988 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.988 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.988 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.988 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.988 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.988 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.988 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:26:22.988 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.988 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:22.988 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:22.988 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:22.988 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzAxNGNjMjIzOTRjZmE5MzlhMTBjNjQ1ZGM1NjU2YzE0NDc0Y2YxZjFhYTdmZmVmNGI4NmYxODA5YzI2NjU1OFRqrsw=: 00:26:22.988 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:22.988 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:22.988 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:22.988 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzAxNGNjMjIzOTRjZmE5MzlhMTBjNjQ1ZGM1NjU2YzE0NDc0Y2YxZjFhYTdmZmVmNGI4NmYxODA5YzI2NjU1OFRqrsw=: 00:26:22.988 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:22.988 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:26:22.988 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.988 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:22.988 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:22.988 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:22.988 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.988 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:22.988 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.988 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.988 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.988 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.988 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:22.988 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:26:22.988 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:22.988 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.988 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.988 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:22.988 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:22.988 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:22.988 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:22.988 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:22.988 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:22.988 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.988 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.988 nvme0n1 00:26:22.988 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.988 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.988 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.988 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.988 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.988 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.988 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.988 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.988 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.988 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.988 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.988 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:22.988 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.989 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:26:22.989 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.989 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:22.989 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:22.989 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:22.989 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmViYjU5NDgyNTNlNDBkN2ZjMzQzNDQ4MzRkZDNiYTHr+RU7: 00:26:22.989 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZTI0YzRhMjA5YWJhMmVhZTZmY2MxZGEyMWJkMWM0ZDQ1MDM0YjMwY2I5MjM1ZTczMDUyMjc3MDk2NzgwM2M5OTBTI5c=: 00:26:22.989 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:22.989 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:22.989 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmViYjU5NDgyNTNlNDBkN2ZjMzQzNDQ4MzRkZDNiYTHr+RU7: 00:26:22.989 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTI0YzRhMjA5YWJhMmVhZTZmY2MxZGEyMWJkMWM0ZDQ1MDM0YjMwY2I5MjM1ZTczMDUyMjc3MDk2NzgwM2M5OTBTI5c=: ]] 00:26:22.989 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTI0YzRhMjA5YWJhMmVhZTZmY2MxZGEyMWJkMWM0ZDQ1MDM0YjMwY2I5MjM1ZTczMDUyMjc3MDk2NzgwM2M5OTBTI5c=: 00:26:22.989 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:26:22.989 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.989 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:22.989 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:22.989 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:22.989 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.989 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:22.989 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.989 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:23.249 nvme0n1 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmFlMmQwODQ4OGM5YWNjYjMzMzI0YjRmN2Y1YjMxNTliM2Q3M2EyYzdiMmI1NDU10+GMbA==: 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yzg2YTU4ZThkOTMzY2JkNWI3MmNlZDVhMDQ5YWI2ZTFlMDc2ZjA3YzBiYmVhOTBiT+gSUg==: 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmFlMmQwODQ4OGM5YWNjYjMzMzI0YjRmN2Y1YjMxNTliM2Q3M2EyYzdiMmI1NDU10+GMbA==: 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yzg2YTU4ZThkOTMzY2JkNWI3MmNlZDVhMDQ5YWI2ZTFlMDc2ZjA3YzBiYmVhOTBiT+gSUg==: ]] 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yzg2YTU4ZThkOTMzY2JkNWI3MmNlZDVhMDQ5YWI2ZTFlMDc2ZjA3YzBiYmVhOTBiT+gSUg==: 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.249 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.509 nvme0n1 00:26:23.509 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.509 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.509 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.509 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.509 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.509 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.509 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.509 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.509 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.509 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.509 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.509 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.509 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:26:23.509 
14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.509 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:23.509 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:23.509 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:23.509 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2I4YjhmZGJmMDc2ZGZiMTZjZDMzMWQ1M2Y3NTYwMmK56u8u: 00:26:23.509 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTdiZmI5YTA3ZDY2YjMxODllYjhlYTMwMjViMThmNjhJ7IzP: 00:26:23.509 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:23.509 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:23.509 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2I4YjhmZGJmMDc2ZGZiMTZjZDMzMWQ1M2Y3NTYwMmK56u8u: 00:26:23.509 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTdiZmI5YTA3ZDY2YjMxODllYjhlYTMwMjViMThmNjhJ7IzP: ]] 00:26:23.509 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTdiZmI5YTA3ZDY2YjMxODllYjhlYTMwMjViMThmNjhJ7IzP: 00:26:23.509 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:26:23.509 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.509 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:23.509 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:23.509 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:23.509 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.509 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:23.509 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.509 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.509 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.509 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.509 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:23.509 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:23.509 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:23.509 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.509 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.509 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:23.509 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.509 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:23.509 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:23.509 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:23.509 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:23.509 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.509 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.768 nvme0n1 00:26:23.768 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.768 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.768 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.769 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.769 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.769 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.769 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.769 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.769 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.769 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.769 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.769 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.769 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:26:23.769 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.769 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:23.769 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:23.769 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:23.769 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmU1ZjQ3Yjc4MWZjODdmYjEwYzI0YWQxODIxNzZmNmI5ODEzZWYxM2VjMTZkNWMwDu5arA==: 00:26:23.769 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjJiNDg1ZTI5NTYzOTYyOTU0MDRhODI5YWRlM2E4NzWcjrTQ: 00:26:23.769 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:23.769 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:23.769 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmU1ZjQ3Yjc4MWZjODdmYjEwYzI0YWQxODIxNzZmNmI5ODEzZWYxM2VjMTZkNWMwDu5arA==: 00:26:23.769 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjJiNDg1ZTI5NTYzOTYyOTU0MDRhODI5YWRlM2E4NzWcjrTQ: ]] 00:26:23.769 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjJiNDg1ZTI5NTYzOTYyOTU0MDRhODI5YWRlM2E4NzWcjrTQ: 00:26:23.769 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:26:23.769 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.769 
14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:23.769 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:23.769 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:23.769 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.769 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:23.769 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.769 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.769 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.769 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.769 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:23.769 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:23.769 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:23.769 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.769 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.769 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:23.769 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.769 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:23.769 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:23.769 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:23.769 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:23.769 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.769 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.769 nvme0n1 00:26:23.769 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.769 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.769 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.769 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.769 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.769 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzAxNGNjMjIzOTRjZmE5MzlhMTBjNjQ1ZGM1NjU2YzE0NDc0Y2YxZjFhYTdmZmVmNGI4NmYxODA5YzI2NjU1OFRqrsw=: 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzAxNGNjMjIzOTRjZmE5MzlhMTBjNjQ1ZGM1NjU2YzE0NDc0Y2YxZjFhYTdmZmVmNGI4NmYxODA5YzI2NjU1OFRqrsw=: 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.029 nvme0n1 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmViYjU5NDgyNTNlNDBkN2ZjMzQzNDQ4MzRkZDNiYTHr+RU7: 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTI0YzRhMjA5YWJhMmVhZTZmY2MxZGEyMWJkMWM0ZDQ1MDM0YjMwY2I5MjM1ZTczMDUyMjc3MDk2NzgwM2M5OTBTI5c=: 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmViYjU5NDgyNTNlNDBkN2ZjMzQzNDQ4MzRkZDNiYTHr+RU7: 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTI0YzRhMjA5YWJhMmVhZTZmY2MxZGEyMWJkMWM0ZDQ1MDM0YjMwY2I5MjM1ZTczMDUyMjc3MDk2NzgwM2M5OTBTI5c=: ]] 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZTI0YzRhMjA5YWJhMmVhZTZmY2MxZGEyMWJkMWM0ZDQ1MDM0YjMwY2I5MjM1ZTczMDUyMjc3MDk2NzgwM2M5OTBTI5c=: 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:24.029 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.030 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.030 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:24.030 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.030 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:24.030 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:24.030 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:24.030 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:24.030 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.030 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.289 nvme0n1 00:26:24.289 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.289 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.289 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.289 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.289 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.289 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.289 
14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.289 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.289 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.289 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.289 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.289 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.289 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:26:24.289 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.289 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:24.289 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:24.289 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:24.289 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmFlMmQwODQ4OGM5YWNjYjMzMzI0YjRmN2Y1YjMxNTliM2Q3M2EyYzdiMmI1NDU10+GMbA==: 00:26:24.289 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yzg2YTU4ZThkOTMzY2JkNWI3MmNlZDVhMDQ5YWI2ZTFlMDc2ZjA3YzBiYmVhOTBiT+gSUg==: 00:26:24.289 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:24.289 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:24.289 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmFlMmQwODQ4OGM5YWNjYjMzMzI0YjRmN2Y1YjMxNTliM2Q3M2EyYzdiMmI1NDU10+GMbA==: 00:26:24.289 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yzg2YTU4ZThkOTMzY2JkNWI3MmNlZDVhMDQ5YWI2ZTFlMDc2ZjA3YzBiYmVhOTBiT+gSUg==: ]] 00:26:24.289 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yzg2YTU4ZThkOTMzY2JkNWI3MmNlZDVhMDQ5YWI2ZTFlMDc2ZjA3YzBiYmVhOTBiT+gSUg==: 00:26:24.289 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:26:24.289 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.289 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:24.289 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:24.289 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:24.290 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.290 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:24.290 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.290 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.290 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.290 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.290 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:24.290 14:30:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:24.290 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:24.290 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.290 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.290 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:24.290 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.290 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:24.290 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:24.290 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:24.290 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:24.290 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.290 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.549 nvme0n1 00:26:24.549 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.549 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.549 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.549 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.549 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.549 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.549 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.549 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.549 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.549 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.549 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.549 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.549 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:26:24.549 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.549 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:24.549 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:24.549 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:24.549 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2I4YjhmZGJmMDc2ZGZiMTZjZDMzMWQ1M2Y3NTYwMmK56u8u: 00:26:24.549 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTdiZmI5YTA3ZDY2YjMxODllYjhlYTMwMjViMThmNjhJ7IzP: 00:26:24.549 14:30:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:24.549 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:24.549 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2I4YjhmZGJmMDc2ZGZiMTZjZDMzMWQ1M2Y3NTYwMmK56u8u: 00:26:24.549 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTdiZmI5YTA3ZDY2YjMxODllYjhlYTMwMjViMThmNjhJ7IzP: ]] 00:26:24.549 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTdiZmI5YTA3ZDY2YjMxODllYjhlYTMwMjViMThmNjhJ7IzP: 00:26:24.549 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:26:24.549 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.549 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:24.549 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:24.549 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:24.549 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.549 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:24.549 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.549 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.549 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.549 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.549 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:24.549 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:24.549 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:24.549 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.549 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.549 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:24.549 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.549 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:24.549 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:24.549 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:24.549 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:24.549 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.549 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.809 nvme0n1 00:26:24.809 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.809 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.809 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.809 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.809 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.809 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.809 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.809 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.809 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.809 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.809 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.809 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.809 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:26:24.809 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.809 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:24.809 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:24.809 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:24.809 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmU1ZjQ3Yjc4MWZjODdmYjEwYzI0YWQxODIxNzZmNmI5ODEzZWYxM2VjMTZkNWMwDu5arA==: 00:26:24.809 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjJiNDg1ZTI5NTYzOTYyOTU0MDRhODI5YWRlM2E4NzWcjrTQ: 00:26:24.809 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:24.809 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:24.809 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmU1ZjQ3Yjc4MWZjODdmYjEwYzI0YWQxODIxNzZmNmI5ODEzZWYxM2VjMTZkNWMwDu5arA==: 00:26:24.809 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjJiNDg1ZTI5NTYzOTYyOTU0MDRhODI5YWRlM2E4NzWcjrTQ: ]] 00:26:24.809 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjJiNDg1ZTI5NTYzOTYyOTU0MDRhODI5YWRlM2E4NzWcjrTQ: 00:26:24.809 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:26:24.809 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.809 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:24.809 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:24.809 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:24.809 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.809 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:24.809 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.809 14:30:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.809 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.809 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.809 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:24.809 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:24.809 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:24.809 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.809 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.809 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:24.809 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.809 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:24.809 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:24.809 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:24.809 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:24.809 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.809 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.069 nvme0n1 00:26:25.069 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.069 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:25.069 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.069 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:25.069 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.069 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.069 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.069 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.069 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.069 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.069 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.069 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:25.069 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:26:25.069 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:25.069 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:25.069 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:25.069 
14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:25.069 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzAxNGNjMjIzOTRjZmE5MzlhMTBjNjQ1ZGM1NjU2YzE0NDc0Y2YxZjFhYTdmZmVmNGI4NmYxODA5YzI2NjU1OFRqrsw=: 00:26:25.069 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:25.069 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:25.069 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:25.069 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzAxNGNjMjIzOTRjZmE5MzlhMTBjNjQ1ZGM1NjU2YzE0NDc0Y2YxZjFhYTdmZmVmNGI4NmYxODA5YzI2NjU1OFRqrsw=: 00:26:25.069 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:25.069 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:26:25.069 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:25.069 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:25.069 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:25.069 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:25.069 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:25.069 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:25.069 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.069 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.069 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.069 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:25.069 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:25.069 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:25.069 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:25.069 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.069 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.069 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:25.069 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:25.069 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:25.069 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:25.069 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:25.069 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:25.069 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.069 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
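For reference, every authentication pass traced above reduces to the same short sequence of SPDK RPC calls: restrict the host to the digest and DH group under test with bdev_nvme_set_options, attach a controller over TCP with the numbered DHHC-1 key (plus the matching controller key when bidirectional authentication is exercised), confirm the controller appears, and detach before the next combination. The following is a minimal sketch of one such pass, not the test script itself; it assumes a target already configured with the corresponding key, keys named key<N>/ckey<N> already loaded on the host side, scripts/rpc.py reachable on PATH, and the default RPC socket. The helper name run_auth_pass is illustrative only.

# Hedged sketch of a single connect/verify/detach authentication pass.
run_auth_pass() {
    local digest=$1 dhgroup=$2 keyid=$3

    # Limit the host to the digest and DH group combination under test.
    scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Attach over TCP, authenticating with the numbered key; in the real run the
    # controller-key argument is dropped for keys that have no bidirectional pair.
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

    # The controller only shows up if DH-HMAC-CHAP authentication succeeded.
    scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name' | grep -qx nvme0

    # Tear down before the next digest/dhgroup/key combination.
    scripts/rpc.py bdev_nvme_detach_controller nvme0
}

# Example invocation matching the passes traced in this section.
run_auth_pass sha512 ffdhe4096 0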
00:26:25.329 nvme0n1 00:26:25.329 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.329 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:25.329 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:25.329 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.329 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.329 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.329 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.329 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.329 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.329 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.329 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.329 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:25.329 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:25.329 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:26:25.329 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:25.329 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:25.329 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:25.329 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:25.329 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmViYjU5NDgyNTNlNDBkN2ZjMzQzNDQ4MzRkZDNiYTHr+RU7: 00:26:25.329 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTI0YzRhMjA5YWJhMmVhZTZmY2MxZGEyMWJkMWM0ZDQ1MDM0YjMwY2I5MjM1ZTczMDUyMjc3MDk2NzgwM2M5OTBTI5c=: 00:26:25.329 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:25.329 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:25.329 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmViYjU5NDgyNTNlNDBkN2ZjMzQzNDQ4MzRkZDNiYTHr+RU7: 00:26:25.329 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTI0YzRhMjA5YWJhMmVhZTZmY2MxZGEyMWJkMWM0ZDQ1MDM0YjMwY2I5MjM1ZTczMDUyMjc3MDk2NzgwM2M5OTBTI5c=: ]] 00:26:25.329 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTI0YzRhMjA5YWJhMmVhZTZmY2MxZGEyMWJkMWM0ZDQ1MDM0YjMwY2I5MjM1ZTczMDUyMjc3MDk2NzgwM2M5OTBTI5c=: 00:26:25.329 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:26:25.329 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:25.329 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:25.329 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:25.329 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:25.329 14:30:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:25.329 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:25.329 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.329 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.329 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.329 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:25.329 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:25.329 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:25.329 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:25.329 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.329 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.329 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:25.329 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:25.329 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:25.329 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:25.329 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:25.329 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:25.329 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.329 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.589 nvme0n1 00:26:25.589 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.848 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:25.848 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.848 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:25.848 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.848 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.848 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.848 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.848 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.848 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.848 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.848 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:25.848 14:30:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:26:25.848 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:25.848 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:25.848 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:25.848 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:25.848 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmFlMmQwODQ4OGM5YWNjYjMzMzI0YjRmN2Y1YjMxNTliM2Q3M2EyYzdiMmI1NDU10+GMbA==: 00:26:25.848 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yzg2YTU4ZThkOTMzY2JkNWI3MmNlZDVhMDQ5YWI2ZTFlMDc2ZjA3YzBiYmVhOTBiT+gSUg==: 00:26:25.848 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:25.848 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:25.848 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmFlMmQwODQ4OGM5YWNjYjMzMzI0YjRmN2Y1YjMxNTliM2Q3M2EyYzdiMmI1NDU10+GMbA==: 00:26:25.848 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yzg2YTU4ZThkOTMzY2JkNWI3MmNlZDVhMDQ5YWI2ZTFlMDc2ZjA3YzBiYmVhOTBiT+gSUg==: ]] 00:26:25.848 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yzg2YTU4ZThkOTMzY2JkNWI3MmNlZDVhMDQ5YWI2ZTFlMDc2ZjA3YzBiYmVhOTBiT+gSUg==: 00:26:25.848 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:26:25.848 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:25.848 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:25.848 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:25.848 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:25.848 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:25.848 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:25.848 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.848 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.848 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.848 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:25.848 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:25.848 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:25.848 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:25.849 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.849 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.849 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:25.849 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:25.849 14:30:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:25.849 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:25.849 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:25.849 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:25.849 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.849 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.108 nvme0n1 00:26:26.108 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.108 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.109 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.109 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.109 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.109 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.109 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.109 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.109 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.109 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.109 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.109 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.109 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:26:26.109 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.109 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:26.109 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:26.109 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:26.109 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2I4YjhmZGJmMDc2ZGZiMTZjZDMzMWQ1M2Y3NTYwMmK56u8u: 00:26:26.109 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTdiZmI5YTA3ZDY2YjMxODllYjhlYTMwMjViMThmNjhJ7IzP: 00:26:26.109 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:26.109 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:26.109 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2I4YjhmZGJmMDc2ZGZiMTZjZDMzMWQ1M2Y3NTYwMmK56u8u: 00:26:26.109 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTdiZmI5YTA3ZDY2YjMxODllYjhlYTMwMjViMThmNjhJ7IzP: ]] 00:26:26.109 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTdiZmI5YTA3ZDY2YjMxODllYjhlYTMwMjViMThmNjhJ7IzP: 00:26:26.109 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:26:26.109 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.109 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:26.109 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:26.109 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:26.109 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.109 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:26.109 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.109 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.109 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.109 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.109 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:26.109 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:26.109 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:26.109 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.109 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.109 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:26.109 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:26.109 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:26.109 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:26.109 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:26.109 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:26.109 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.109 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.368 nvme0n1 00:26:26.368 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.368 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.368 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.368 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.368 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.628 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.628 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.628 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:26.628 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.628 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.628 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.628 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.628 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:26:26.628 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.628 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:26.628 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:26.628 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:26.628 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmU1ZjQ3Yjc4MWZjODdmYjEwYzI0YWQxODIxNzZmNmI5ODEzZWYxM2VjMTZkNWMwDu5arA==: 00:26:26.628 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjJiNDg1ZTI5NTYzOTYyOTU0MDRhODI5YWRlM2E4NzWcjrTQ: 00:26:26.628 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:26.628 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:26.628 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmU1ZjQ3Yjc4MWZjODdmYjEwYzI0YWQxODIxNzZmNmI5ODEzZWYxM2VjMTZkNWMwDu5arA==: 00:26:26.628 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjJiNDg1ZTI5NTYzOTYyOTU0MDRhODI5YWRlM2E4NzWcjrTQ: ]] 00:26:26.628 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjJiNDg1ZTI5NTYzOTYyOTU0MDRhODI5YWRlM2E4NzWcjrTQ: 00:26:26.628 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:26:26.628 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.628 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:26.628 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:26.628 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:26.628 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.628 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:26.628 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.628 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.628 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.628 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.628 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:26.628 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:26.628 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:26.628 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.628 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.628 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:26.628 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:26.628 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:26.628 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:26.628 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:26.628 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:26.628 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.628 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.888 nvme0n1 00:26:26.888 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.888 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.888 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.888 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.888 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.888 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.888 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.888 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.888 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.888 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.888 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.888 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.888 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:26:26.888 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.888 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:26.888 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:26.888 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:26.888 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzAxNGNjMjIzOTRjZmE5MzlhMTBjNjQ1ZGM1NjU2YzE0NDc0Y2YxZjFhYTdmZmVmNGI4NmYxODA5YzI2NjU1OFRqrsw=: 00:26:26.888 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:26.888 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:26.888 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:26.888 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MzAxNGNjMjIzOTRjZmE5MzlhMTBjNjQ1ZGM1NjU2YzE0NDc0Y2YxZjFhYTdmZmVmNGI4NmYxODA5YzI2NjU1OFRqrsw=: 00:26:26.888 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:26.888 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:26:26.888 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.888 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:26.888 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:26.888 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:26.888 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.888 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:26.888 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.888 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.888 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.888 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.888 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:26.888 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:26.888 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:26.888 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.888 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.888 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:26.888 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:26.888 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:26.888 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:26.888 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:26.888 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:26.888 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.888 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.167 nvme0n1 00:26:27.167 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.167 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.167 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.167 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.167 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.167 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.167 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.167 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.167 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.167 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.427 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.427 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:27.427 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.427 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:26:27.427 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.427 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:27.427 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:27.427 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:27.427 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmViYjU5NDgyNTNlNDBkN2ZjMzQzNDQ4MzRkZDNiYTHr+RU7: 00:26:27.427 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTI0YzRhMjA5YWJhMmVhZTZmY2MxZGEyMWJkMWM0ZDQ1MDM0YjMwY2I5MjM1ZTczMDUyMjc3MDk2NzgwM2M5OTBTI5c=: 00:26:27.427 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:27.427 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:27.427 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmViYjU5NDgyNTNlNDBkN2ZjMzQzNDQ4MzRkZDNiYTHr+RU7: 00:26:27.427 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTI0YzRhMjA5YWJhMmVhZTZmY2MxZGEyMWJkMWM0ZDQ1MDM0YjMwY2I5MjM1ZTczMDUyMjc3MDk2NzgwM2M5OTBTI5c=: ]] 00:26:27.427 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTI0YzRhMjA5YWJhMmVhZTZmY2MxZGEyMWJkMWM0ZDQ1MDM0YjMwY2I5MjM1ZTczMDUyMjc3MDk2NzgwM2M5OTBTI5c=: 00:26:27.427 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:26:27.427 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.427 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:27.427 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:27.427 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:27.427 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.427 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:27.427 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.427 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.427 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.427 14:30:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.427 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:27.427 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:27.427 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:27.427 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.427 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.427 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:27.427 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.427 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:27.427 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:27.427 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:27.427 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:27.427 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.427 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.996 nvme0n1 00:26:27.996 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.996 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.996 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.996 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.996 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.996 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.996 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.996 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.996 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.996 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.996 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.996 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.996 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:26:27.996 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.996 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:27.996 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:27.996 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:27.996 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MmFlMmQwODQ4OGM5YWNjYjMzMzI0YjRmN2Y1YjMxNTliM2Q3M2EyYzdiMmI1NDU10+GMbA==: 00:26:27.996 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yzg2YTU4ZThkOTMzY2JkNWI3MmNlZDVhMDQ5YWI2ZTFlMDc2ZjA3YzBiYmVhOTBiT+gSUg==: 00:26:27.996 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:27.996 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:27.996 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmFlMmQwODQ4OGM5YWNjYjMzMzI0YjRmN2Y1YjMxNTliM2Q3M2EyYzdiMmI1NDU10+GMbA==: 00:26:27.996 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yzg2YTU4ZThkOTMzY2JkNWI3MmNlZDVhMDQ5YWI2ZTFlMDc2ZjA3YzBiYmVhOTBiT+gSUg==: ]] 00:26:27.996 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yzg2YTU4ZThkOTMzY2JkNWI3MmNlZDVhMDQ5YWI2ZTFlMDc2ZjA3YzBiYmVhOTBiT+gSUg==: 00:26:27.996 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:26:27.996 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.996 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:27.996 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:27.996 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:27.996 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.996 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:27.996 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.996 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.996 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.996 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.996 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:27.996 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:27.996 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:27.996 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.996 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.996 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:27.996 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.996 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:27.996 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:27.996 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:27.996 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:27.996 14:30:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.996 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.564 nvme0n1 00:26:28.564 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.564 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.564 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.564 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.564 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:28.564 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.564 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.564 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.564 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.564 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.564 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.564 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:28.564 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:26:28.564 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:28.564 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:28.564 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:28.564 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:28.564 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2I4YjhmZGJmMDc2ZGZiMTZjZDMzMWQ1M2Y3NTYwMmK56u8u: 00:26:28.564 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTdiZmI5YTA3ZDY2YjMxODllYjhlYTMwMjViMThmNjhJ7IzP: 00:26:28.564 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:28.564 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:28.564 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2I4YjhmZGJmMDc2ZGZiMTZjZDMzMWQ1M2Y3NTYwMmK56u8u: 00:26:28.564 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTdiZmI5YTA3ZDY2YjMxODllYjhlYTMwMjViMThmNjhJ7IzP: ]] 00:26:28.564 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTdiZmI5YTA3ZDY2YjMxODllYjhlYTMwMjViMThmNjhJ7IzP: 00:26:28.564 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:26:28.564 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:28.564 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:28.564 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:28.564 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:28.564 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:28.564 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:28.564 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.564 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.564 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.564 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:28.564 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:28.564 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:28.564 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:28.564 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.564 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.564 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:28.564 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.564 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:28.564 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:28.564 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:28.564 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:28.564 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.564 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.133 nvme0n1 00:26:29.133 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.133 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.133 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.133 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.133 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.133 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.133 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.133 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.133 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.133 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.133 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.133 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:29.133 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:26:29.133 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.133 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:29.133 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:29.133 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:29.133 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmU1ZjQ3Yjc4MWZjODdmYjEwYzI0YWQxODIxNzZmNmI5ODEzZWYxM2VjMTZkNWMwDu5arA==: 00:26:29.133 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjJiNDg1ZTI5NTYzOTYyOTU0MDRhODI5YWRlM2E4NzWcjrTQ: 00:26:29.133 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:29.133 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:29.133 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmU1ZjQ3Yjc4MWZjODdmYjEwYzI0YWQxODIxNzZmNmI5ODEzZWYxM2VjMTZkNWMwDu5arA==: 00:26:29.133 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjJiNDg1ZTI5NTYzOTYyOTU0MDRhODI5YWRlM2E4NzWcjrTQ: ]] 00:26:29.133 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjJiNDg1ZTI5NTYzOTYyOTU0MDRhODI5YWRlM2E4NzWcjrTQ: 00:26:29.133 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:26:29.133 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.133 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:29.133 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:29.133 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:29.133 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.133 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:29.133 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.133 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.133 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.133 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.133 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:29.133 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:29.133 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:29.133 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.133 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.133 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:29.133 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.133 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:29.133 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:29.133 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:29.133 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:29.133 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.133 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.702 nvme0n1 00:26:29.702 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.702 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.702 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.702 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.702 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.702 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.702 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.702 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.702 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.702 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.702 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.702 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:29.702 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:26:29.702 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.702 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:29.702 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:29.702 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:29.702 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzAxNGNjMjIzOTRjZmE5MzlhMTBjNjQ1ZGM1NjU2YzE0NDc0Y2YxZjFhYTdmZmVmNGI4NmYxODA5YzI2NjU1OFRqrsw=: 00:26:29.702 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:29.702 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:29.702 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:29.702 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzAxNGNjMjIzOTRjZmE5MzlhMTBjNjQ1ZGM1NjU2YzE0NDc0Y2YxZjFhYTdmZmVmNGI4NmYxODA5YzI2NjU1OFRqrsw=: 00:26:29.702 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:29.702 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:26:29.702 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.702 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:29.702 14:30:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:29.702 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:29.702 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.702 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:29.702 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.702 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.702 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.702 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.702 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:29.702 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:29.702 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:29.702 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.702 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.702 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:29.702 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.702 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:29.702 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:29.702 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:29.702 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:29.702 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.702 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.271 nvme0n1 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmFlMmQwODQ4OGM5YWNjYjMzMzI0YjRmN2Y1YjMxNTliM2Q3M2EyYzdiMmI1NDU10+GMbA==: 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yzg2YTU4ZThkOTMzY2JkNWI3MmNlZDVhMDQ5YWI2ZTFlMDc2ZjA3YzBiYmVhOTBiT+gSUg==: 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmFlMmQwODQ4OGM5YWNjYjMzMzI0YjRmN2Y1YjMxNTliM2Q3M2EyYzdiMmI1NDU10+GMbA==: 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yzg2YTU4ZThkOTMzY2JkNWI3MmNlZDVhMDQ5YWI2ZTFlMDc2ZjA3YzBiYmVhOTBiT+gSUg==: ]] 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yzg2YTU4ZThkOTMzY2JkNWI3MmNlZDVhMDQ5YWI2ZTFlMDc2ZjA3YzBiYmVhOTBiT+gSUg==: 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # 
local es=0 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.271 request: 00:26:30.271 { 00:26:30.271 "name": "nvme0", 00:26:30.271 "trtype": "tcp", 00:26:30.271 "traddr": "10.0.0.1", 00:26:30.271 "adrfam": "ipv4", 00:26:30.271 "trsvcid": "4420", 00:26:30.271 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:30.271 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:30.271 "prchk_reftag": false, 00:26:30.271 "prchk_guard": false, 00:26:30.271 "hdgst": false, 00:26:30.271 "ddgst": false, 00:26:30.271 "allow_unrecognized_csi": false, 00:26:30.271 "method": "bdev_nvme_attach_controller", 00:26:30.271 "req_id": 1 00:26:30.271 } 00:26:30.271 Got JSON-RPC error response 00:26:30.271 response: 00:26:30.271 { 00:26:30.271 "code": -5, 00:26:30.271 "message": "Input/output error" 00:26:30.271 } 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:30.271 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:30.272 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:30.272 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:30.272 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:26:30.272 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:30.272 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:30.272 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:30.272 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:30.272 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:30.272 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:30.272 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.272 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.272 request: 00:26:30.272 { 00:26:30.272 "name": "nvme0", 00:26:30.272 "trtype": "tcp", 00:26:30.272 "traddr": "10.0.0.1", 00:26:30.272 "adrfam": "ipv4", 00:26:30.272 "trsvcid": "4420", 00:26:30.272 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:30.272 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:30.272 "prchk_reftag": false, 00:26:30.272 "prchk_guard": false, 00:26:30.272 "hdgst": false, 00:26:30.272 "ddgst": false, 00:26:30.272 "dhchap_key": "key2", 00:26:30.272 "allow_unrecognized_csi": false, 00:26:30.272 "method": "bdev_nvme_attach_controller", 00:26:30.272 "req_id": 1 00:26:30.272 } 00:26:30.272 Got JSON-RPC error response 00:26:30.272 response: 00:26:30.272 { 00:26:30.272 "code": -5, 00:26:30.272 "message": "Input/output error" 00:26:30.272 } 00:26:30.272 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:30.272 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:26:30.272 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:30.272 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:30.272 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:30.531 14:30:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.531 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.531 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:26:30.531 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.532 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.532 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:26:30.532 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:26:30.532 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:30.532 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:30.532 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:30.532 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.532 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.532 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:30.532 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:30.532 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:30.532 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:30.532 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:30.532 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:30.532 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:26:30.532 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:30.532 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:30.532 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:30.532 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:30.532 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:30.532 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:30.532 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.532 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.532 request: 00:26:30.532 { 00:26:30.532 "name": "nvme0", 00:26:30.532 "trtype": "tcp", 00:26:30.532 "traddr": "10.0.0.1", 00:26:30.532 "adrfam": "ipv4", 00:26:30.532 "trsvcid": "4420", 
00:26:30.532 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:30.532 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:30.532 "prchk_reftag": false, 00:26:30.532 "prchk_guard": false, 00:26:30.532 "hdgst": false, 00:26:30.532 "ddgst": false, 00:26:30.532 "dhchap_key": "key1", 00:26:30.532 "dhchap_ctrlr_key": "ckey2", 00:26:30.532 "allow_unrecognized_csi": false, 00:26:30.532 "method": "bdev_nvme_attach_controller", 00:26:30.532 "req_id": 1 00:26:30.532 } 00:26:30.532 Got JSON-RPC error response 00:26:30.532 response: 00:26:30.532 { 00:26:30.532 "code": -5, 00:26:30.532 "message": "Input/output error" 00:26:30.532 } 00:26:30.532 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:30.532 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:26:30.532 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:30.532 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:30.532 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:30.532 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:26:30.532 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:30.532 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:30.532 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:30.532 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.532 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.532 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:30.532 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:30.532 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:30.532 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:30.532 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:30.532 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:26:30.532 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.532 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.532 nvme0n1 00:26:30.532 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.532 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:30.532 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:30.532 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:30.532 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:30.532 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:30.532 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:N2I4YjhmZGJmMDc2ZGZiMTZjZDMzMWQ1M2Y3NTYwMmK56u8u: 00:26:30.532 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTdiZmI5YTA3ZDY2YjMxODllYjhlYTMwMjViMThmNjhJ7IzP: 00:26:30.532 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:30.532 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:30.532 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2I4YjhmZGJmMDc2ZGZiMTZjZDMzMWQ1M2Y3NTYwMmK56u8u: 00:26:30.532 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTdiZmI5YTA3ZDY2YjMxODllYjhlYTMwMjViMThmNjhJ7IzP: ]] 00:26:30.532 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTdiZmI5YTA3ZDY2YjMxODllYjhlYTMwMjViMThmNjhJ7IzP: 00:26:30.532 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:30.532 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.532 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.532 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.532 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.532 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:26:30.532 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.532 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.532 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.792 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.792 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:30.792 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:26:30.792 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:30.792 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:30.792 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:30.792 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:30.792 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:30.792 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:30.792 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.792 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.792 request: 00:26:30.792 { 00:26:30.792 "name": "nvme0", 00:26:30.792 "dhchap_key": "key1", 00:26:30.792 "dhchap_ctrlr_key": "ckey2", 00:26:30.792 "method": "bdev_nvme_set_keys", 00:26:30.792 "req_id": 1 00:26:30.792 } 00:26:30.792 Got JSON-RPC error response 00:26:30.792 response: 00:26:30.792 
{ 00:26:30.792 "code": -13, 00:26:30.792 "message": "Permission denied" 00:26:30.792 } 00:26:30.792 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:30.792 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:26:30.792 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:30.792 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:30.792 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:30.792 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.792 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:30.792 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.792 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.792 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.792 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:26:30.792 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:26:31.729 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.730 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.730 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:31.730 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.730 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.730 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:26:31.730 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:31.730 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.730 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:31.730 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:31.730 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:31.730 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmFlMmQwODQ4OGM5YWNjYjMzMzI0YjRmN2Y1YjMxNTliM2Q3M2EyYzdiMmI1NDU10+GMbA==: 00:26:31.730 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yzg2YTU4ZThkOTMzY2JkNWI3MmNlZDVhMDQ5YWI2ZTFlMDc2ZjA3YzBiYmVhOTBiT+gSUg==: 00:26:31.730 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:31.730 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:31.730 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmFlMmQwODQ4OGM5YWNjYjMzMzI0YjRmN2Y1YjMxNTliM2Q3M2EyYzdiMmI1NDU10+GMbA==: 00:26:31.730 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yzg2YTU4ZThkOTMzY2JkNWI3MmNlZDVhMDQ5YWI2ZTFlMDc2ZjA3YzBiYmVhOTBiT+gSUg==: ]] 00:26:31.730 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yzg2YTU4ZThkOTMzY2JkNWI3MmNlZDVhMDQ5YWI2ZTFlMDc2ZjA3YzBiYmVhOTBiT+gSUg==: 00:26:31.730 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:26:31.730 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:31.730 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:31.730 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:31.730 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.730 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.730 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:31.730 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:31.730 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:31.730 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:31.730 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:31.730 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:26:31.730 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.730 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.989 nvme0n1 00:26:31.989 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.989 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:31.989 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.989 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:31.989 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:31.989 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:31.989 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2I4YjhmZGJmMDc2ZGZiMTZjZDMzMWQ1M2Y3NTYwMmK56u8u: 00:26:31.989 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTdiZmI5YTA3ZDY2YjMxODllYjhlYTMwMjViMThmNjhJ7IzP: 00:26:31.989 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:31.989 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:31.989 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2I4YjhmZGJmMDc2ZGZiMTZjZDMzMWQ1M2Y3NTYwMmK56u8u: 00:26:31.989 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTdiZmI5YTA3ZDY2YjMxODllYjhlYTMwMjViMThmNjhJ7IzP: ]] 00:26:31.989 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTdiZmI5YTA3ZDY2YjMxODllYjhlYTMwMjViMThmNjhJ7IzP: 00:26:31.989 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:31.989 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:26:31.989 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:31.989 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:31.989 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:31.989 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:31.989 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:31.989 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:31.989 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.989 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.989 request: 00:26:31.989 { 00:26:31.989 "name": "nvme0", 00:26:31.989 "dhchap_key": "key2", 00:26:31.989 "dhchap_ctrlr_key": "ckey1", 00:26:31.989 "method": "bdev_nvme_set_keys", 00:26:31.989 "req_id": 1 00:26:31.989 } 00:26:31.989 Got JSON-RPC error response 00:26:31.989 response: 00:26:31.989 { 00:26:31.989 "code": -13, 00:26:31.989 "message": "Permission denied" 00:26:31.989 } 00:26:31.989 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:31.989 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:26:31.989 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:31.989 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:31.989 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:31.989 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:26:31.989 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.989 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.989 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.989 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.989 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:26:31.989 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:26:32.926 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.926 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:26:32.926 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.926 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.926 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.184 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:26:33.184 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:26:33.184 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:26:33.184 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:26:33.184 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:26:33.184 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:26:33.184 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:33.184 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:26:33.184 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:33.184 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:33.184 rmmod nvme_tcp 00:26:33.184 rmmod nvme_fabrics 00:26:33.184 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:33.184 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:26:33.184 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:26:33.184 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 85241 ']' 00:26:33.184 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 85241 00:26:33.184 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' -z 85241 ']' 00:26:33.184 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # kill -0 85241 00:26:33.184 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # uname 00:26:33.184 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:33.184 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85241 00:26:33.184 killing process with pid 85241 00:26:33.184 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:33.184 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:33.184 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85241' 00:26:33.184 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@971 -- # kill 85241 00:26:33.184 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@976 -- # wait 85241 00:26:34.563 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:34.563 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:34.563 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:34.563 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:26:34.563 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:26:34.563 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:26:34.563 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:34.563 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:34.563 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:34.563 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:34.563 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:34.563 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:34.563 14:31:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:34.563 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:34.563 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:34.563 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:34.563 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:34.563 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:34.563 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:34.563 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:34.563 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:34.563 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:34.563 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:34.563 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:34.563 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:34.563 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:34.563 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:26:34.563 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:34.563 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:34.563 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:26:34.563 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:26:34.563 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:26:34.823 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:34.823 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:34.823 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:34.823 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:34.823 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:26:34.823 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:26:34.823 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:35.768 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:35.768 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
00:26:35.768 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:26:35.768 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.7qm /tmp/spdk.key-null.iLp /tmp/spdk.key-sha256.c7X /tmp/spdk.key-sha384.6JN /tmp/spdk.key-sha512.kmK /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:26:35.768 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:36.383 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:36.383 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:26:36.383 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:26:36.383 00:26:36.383 real 0m37.356s 00:26:36.383 user 0m33.982s 00:26:36.383 sys 0m5.716s 00:26:36.383 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:36.383 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.383 ************************************ 00:26:36.383 END TEST nvmf_auth_host 00:26:36.383 ************************************ 00:26:36.642 14:31:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:26:36.642 14:31:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:36.642 14:31:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:36.642 14:31:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:36.642 14:31:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.642 ************************************ 00:26:36.642 START TEST nvmf_digest 00:26:36.642 ************************************ 00:26:36.642 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:36.642 * Looking for test storage... 
00:26:36.642 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:36.642 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:36.642 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:26:36.642 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:36.642 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:36.642 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:36.642 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:36.642 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:36.642 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:26:36.642 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:26:36.642 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:26:36.642 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:26:36.642 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:26:36.642 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:26:36.642 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:26:36.642 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:36.642 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:26:36.642 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:26:36.642 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:36.642 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:36.642 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:26:36.642 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:26:36.642 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:36.642 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:26:36.642 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:26:36.642 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:36.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:36.902 --rc genhtml_branch_coverage=1 00:26:36.902 --rc genhtml_function_coverage=1 00:26:36.902 --rc genhtml_legend=1 00:26:36.902 --rc geninfo_all_blocks=1 00:26:36.902 --rc geninfo_unexecuted_blocks=1 00:26:36.902 00:26:36.902 ' 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:36.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:36.902 --rc genhtml_branch_coverage=1 00:26:36.902 --rc genhtml_function_coverage=1 00:26:36.902 --rc genhtml_legend=1 00:26:36.902 --rc geninfo_all_blocks=1 00:26:36.902 --rc geninfo_unexecuted_blocks=1 00:26:36.902 00:26:36.902 ' 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:36.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:36.902 --rc genhtml_branch_coverage=1 00:26:36.902 --rc genhtml_function_coverage=1 00:26:36.902 --rc genhtml_legend=1 00:26:36.902 --rc geninfo_all_blocks=1 00:26:36.902 --rc geninfo_unexecuted_blocks=1 00:26:36.902 00:26:36.902 ' 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:36.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:36.902 --rc genhtml_branch_coverage=1 00:26:36.902 --rc genhtml_function_coverage=1 00:26:36.902 --rc genhtml_legend=1 00:26:36.902 --rc geninfo_all_blocks=1 00:26:36.902 --rc geninfo_unexecuted_blocks=1 00:26:36.902 00:26:36.902 ' 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:36.902 14:31:04 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:36.902 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:36.902 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:36.903 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:36.903 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:36.903 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:36.903 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:36.903 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:36.903 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:36.903 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:36.903 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:36.903 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:36.903 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:36.903 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:36.903 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:36.903 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:36.903 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:36.903 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:36.903 Cannot find device "nvmf_init_br" 00:26:36.903 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:26:36.903 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:36.903 Cannot find device "nvmf_init_br2" 00:26:36.903 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:26:36.903 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:36.903 Cannot find device "nvmf_tgt_br" 00:26:36.903 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:26:36.903 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:26:36.903 Cannot find device "nvmf_tgt_br2" 00:26:36.903 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:26:36.903 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:36.903 Cannot find device "nvmf_init_br" 00:26:36.903 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:26:36.903 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:36.903 Cannot find device "nvmf_init_br2" 00:26:36.903 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:26:36.903 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:36.903 Cannot find device "nvmf_tgt_br" 00:26:36.903 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:26:36.903 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:36.903 Cannot find device "nvmf_tgt_br2" 00:26:36.903 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:26:36.903 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:36.903 Cannot find device "nvmf_br" 00:26:36.903 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:26:36.903 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:36.903 Cannot find device "nvmf_init_if" 00:26:36.903 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:26:36.903 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:36.903 Cannot find device "nvmf_init_if2" 00:26:36.903 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:26:36.903 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:36.903 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:36.903 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:26:36.903 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:37.162 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:37.162 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:26:37.162 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:37.162 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:37.162 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:37.162 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:37.162 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:37.162 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:37.162 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:37.162 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:37.162 14:31:04 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:37.162 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:37.162 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:37.162 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:37.162 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:37.162 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:37.162 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:37.162 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:37.162 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:37.162 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:37.162 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:37.162 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:37.162 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:37.162 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:37.162 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:37.162 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:37.162 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:37.162 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:37.422 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:37.422 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:37.422 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:37.422 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:37.422 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:37.422 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:37.422 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:37.422 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:26:37.422 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.112 ms 00:26:37.422 00:26:37.422 --- 10.0.0.3 ping statistics --- 00:26:37.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:37.422 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:26:37.422 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:37.422 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:37.422 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.087 ms 00:26:37.422 00:26:37.422 --- 10.0.0.4 ping statistics --- 00:26:37.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:37.422 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:26:37.422 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:37.422 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:37.422 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:26:37.422 00:26:37.422 --- 10.0.0.1 ping statistics --- 00:26:37.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:37.422 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:26:37.422 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:37.422 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:37.422 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:26:37.422 00:26:37.422 --- 10.0.0.2 ping statistics --- 00:26:37.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:37.422 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:26:37.422 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:37.422 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:26:37.422 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:37.422 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:37.422 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:37.422 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:37.422 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:37.422 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:37.422 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:37.422 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:37.422 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:26:37.422 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:26:37.422 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:26:37.422 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:37.422 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:37.422 ************************************ 00:26:37.422 START TEST nvmf_digest_clean 00:26:37.422 ************************************ 00:26:37.422 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1127 -- # run_digest 00:26:37.422 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
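The nvmf_veth_init steps above build the whole test network in software: initiator interfaces nvmf_init_if (10.0.0.1) and nvmf_init_if2 (10.0.0.2) stay on the host, target interfaces nvmf_tgt_if (10.0.0.3) and nvmf_tgt_if2 (10.0.0.4) are moved into the nvmf_tgt_ns_spdk namespace, the peer ends are all enslaved to the nvmf_br bridge, and iptables rules admit TCP port 4420. The pings confirm reachability in both directions before any NVMe-oF traffic is attempted. A condensed sketch of the same plumbing for a single initiator/target pair (names and addresses taken from the log; this is a simplified outline, not a replacement for nvmf/common.sh):

    # one initiator/target pair, reduced from the nvmf_veth_init steps above
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                        # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br up; ip link set nvmf_init_br master nvmf_br  # bridge the host-side peer ends
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3                                                    # host -> target namespace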
00:26:37.422 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:37.422 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:26:37.422 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:37.422 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:37.422 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:37.422 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:37.422 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:37.422 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=86886 00:26:37.422 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:37.422 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 86886 00:26:37.422 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 86886 ']' 00:26:37.422 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:37.422 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:37.422 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:37.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:37.422 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:37.422 14:31:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:37.422 [2024-11-06 14:31:05.019936] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:26:37.422 [2024-11-06 14:31:05.020070] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:37.682 [2024-11-06 14:31:05.196165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:37.941 [2024-11-06 14:31:05.332825] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:37.941 [2024-11-06 14:31:05.332894] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:37.941 [2024-11-06 14:31:05.332911] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:37.941 [2024-11-06 14:31:05.332932] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:37.941 [2024-11-06 14:31:05.332945] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
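The target application is started inside the namespace with -i 0 -e 0xFFFF --wait-for-rpc, so nothing is initialized until an RPC arrives on /var/tmp/spdk.sock; waitforlisten just polls for that socket, after which common_target_config creates the null0 bdev, the TCP transport and the 10.0.0.3:4420 listener reported below. A hedged sketch of an equivalent manual sequence (the RPC names are the standard SPDK rpc.py ones and the null bdev geometry is illustrative; neither is quoted from this log):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    until $RPC rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done        # stand-in for waitforlisten
    $RPC framework_start_init
    $RPC nvmf_create_transport -t tcp -o                                  # NVMF_TRANSPORT_OPTS from the log
    $RPC bdev_null_create null0 100 4096                                  # illustrative size_mb / block_size
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420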
00:26:37.941 [2024-11-06 14:31:05.334244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:38.509 14:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:38.509 14:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:26:38.509 14:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:38.509 14:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:38.509 14:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:38.509 14:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:38.509 14:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:26:38.509 14:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:26:38.509 14:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:26:38.509 14:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.509 14:31:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:38.769 [2024-11-06 14:31:06.156136] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:38.769 null0 00:26:38.769 [2024-11-06 14:31:06.311124] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:38.769 [2024-11-06 14:31:06.335273] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:38.769 14:31:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.769 14:31:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:26:38.769 14:31:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:38.769 14:31:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:38.769 14:31:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:38.769 14:31:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:38.769 14:31:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:38.769 14:31:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:38.769 14:31:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=86918 00:26:38.769 14:31:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 86918 /var/tmp/bperf.sock 00:26:38.769 14:31:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:38.769 14:31:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 86918 ']' 00:26:38.769 14:31:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:26:38.769 14:31:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:38.769 14:31:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:38.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:38.769 14:31:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:38.769 14:31:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:39.028 [2024-11-06 14:31:06.444640] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:26:39.028 [2024-11-06 14:31:06.444969] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86918 ] 00:26:39.028 [2024-11-06 14:31:06.629153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:39.287 [2024-11-06 14:31:06.808478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:39.885 14:31:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:39.885 14:31:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:26:39.885 14:31:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:39.885 14:31:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:39.885 14:31:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:40.147 [2024-11-06 14:31:07.698938] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:40.405 14:31:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:40.405 14:31:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:40.663 nvme0n1 00:26:40.663 14:31:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:40.663 14:31:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:40.663 Running I/O for 2 seconds... 
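Every run_bperf invocation repeats the host-side pattern visible above: bdevperf is started suspended with -z --wait-for-rpc on its own socket (/var/tmp/bperf.sock), framework_start_init finishes bringing it up, bdev_nvme_attach_controller connects to the target and exposes the namespace as nvme0n1 (here with --ddgst, so data digests keep the crc32c accel path busy), and bdevperf.py perform_tests fires the preconfigured workload. For this first run (4 KiB random reads, queue depth 128, 2 seconds) the sequence is, in sketch form using only commands that appear in the log:

    BPERF_SOCK=/var/tmp/bperf.sock
    SPDK=/home/vagrant/spdk_repo/spdk
    $SPDK/build/examples/bdevperf -m 2 -r $BPERF_SOCK -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    # once the socket is listening:
    $SPDK/scripts/rpc.py -s $BPERF_SOCK framework_start_init
    $SPDK/scripts/rpc.py -s $BPERF_SOCK bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0   # exposes nvme0n1
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s $BPERF_SOCK perform_tests            # "Running I/O for 2 seconds..."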
00:26:42.976 16383.00 IOPS, 64.00 MiB/s [2024-11-06T14:31:10.611Z] 16510.00 IOPS, 64.49 MiB/s 00:26:42.976 Latency(us) 00:26:42.976 [2024-11-06T14:31:10.611Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:42.976 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:42.976 nvme0n1 : 2.01 16495.90 64.44 0.00 0.00 7754.67 7422.15 20424.07 00:26:42.976 [2024-11-06T14:31:10.611Z] =================================================================================================================== 00:26:42.976 [2024-11-06T14:31:10.611Z] Total : 16495.90 64.44 0.00 0.00 7754.67 7422.15 20424.07 00:26:42.976 { 00:26:42.976 "results": [ 00:26:42.976 { 00:26:42.976 "job": "nvme0n1", 00:26:42.976 "core_mask": "0x2", 00:26:42.976 "workload": "randread", 00:26:42.976 "status": "finished", 00:26:42.976 "queue_depth": 128, 00:26:42.976 "io_size": 4096, 00:26:42.976 "runtime": 2.009469, 00:26:42.976 "iops": 16495.900160689216, 00:26:42.976 "mibps": 64.43711000269225, 00:26:42.976 "io_failed": 0, 00:26:42.976 "io_timeout": 0, 00:26:42.976 "avg_latency_us": 7754.6739398767995, 00:26:42.976 "min_latency_us": 7422.149397590361, 00:26:42.976 "max_latency_us": 20424.070682730922 00:26:42.976 } 00:26:42.976 ], 00:26:42.976 "core_count": 1 00:26:42.976 } 00:26:42.976 14:31:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:42.976 14:31:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:42.976 14:31:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:42.976 14:31:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:42.976 | select(.opcode=="crc32c") 00:26:42.976 | "\(.module_name) \(.executed)"' 00:26:42.976 14:31:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:42.976 14:31:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:42.976 14:31:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:42.976 14:31:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:42.976 14:31:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:42.976 14:31:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 86918 00:26:42.976 14:31:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 86918 ']' 00:26:42.976 14:31:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 86918 00:26:42.976 14:31:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:26:42.976 14:31:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:42.976 14:31:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86918 00:26:42.976 14:31:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:42.976 14:31:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 
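After each run the test checks that the digest work actually went through the expected accel module: accel_get_stats is queried on the bperf socket, the crc32c row is pulled out with jq, and since no DSA or mlx5 offload is configured here (scan_dsa=false) the expected module is software with a non-zero executed count. The bdevperf process is then torn down with killprocess on the recorded bperfpid. A condensed form of that check, reusing the exact jq filter from the log (bperfpid is the pid recorded at launch, 86918 for this run):

    read -r acc_module acc_executed < <(
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    [[ $acc_module == software ]] && (( acc_executed > 0 )) && echo "crc32c executed in software"
    kill "$bperfpid" && wait "$bperfpid"                                   # killprocess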
00:26:42.976 killing process with pid 86918 00:26:42.976 14:31:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86918' 00:26:42.976 Received shutdown signal, test time was about 2.000000 seconds 00:26:42.976 00:26:42.976 Latency(us) 00:26:42.976 [2024-11-06T14:31:10.611Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:42.976 [2024-11-06T14:31:10.611Z] =================================================================================================================== 00:26:42.976 [2024-11-06T14:31:10.611Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:42.976 14:31:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 86918 00:26:42.976 14:31:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 86918 00:26:44.354 14:31:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:44.354 14:31:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:44.354 14:31:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:44.354 14:31:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:44.354 14:31:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:44.354 14:31:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:44.354 14:31:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:44.354 14:31:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=86990 00:26:44.354 14:31:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:44.354 14:31:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 86990 /var/tmp/bperf.sock 00:26:44.354 14:31:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 86990 ']' 00:26:44.354 14:31:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:44.354 14:31:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:44.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:44.354 14:31:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:44.354 14:31:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:44.354 14:31:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:44.354 [2024-11-06 14:31:11.725786] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:26:44.354 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:44.354 Zero copy mechanism will not be used. 
00:26:44.354 [2024-11-06 14:31:11.725953] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86990 ] 00:26:44.354 [2024-11-06 14:31:11.910259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:44.613 [2024-11-06 14:31:12.054212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:45.221 14:31:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:45.221 14:31:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:26:45.221 14:31:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:45.221 14:31:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:45.221 14:31:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:45.479 [2024-11-06 14:31:12.991143] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:45.738 14:31:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:45.738 14:31:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:45.996 nvme0n1 00:26:45.996 14:31:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:45.996 14:31:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:45.996 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:45.996 Zero copy mechanism will not be used. 00:26:45.996 Running I/O for 2 seconds... 
00:26:48.312 7024.00 IOPS, 878.00 MiB/s [2024-11-06T14:31:15.947Z] 6992.00 IOPS, 874.00 MiB/s 00:26:48.312 Latency(us) 00:26:48.312 [2024-11-06T14:31:15.947Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:48.312 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:48.312 nvme0n1 : 2.00 6992.08 874.01 0.00 0.00 2285.38 2131.89 4579.62 00:26:48.312 [2024-11-06T14:31:15.947Z] =================================================================================================================== 00:26:48.312 [2024-11-06T14:31:15.947Z] Total : 6992.08 874.01 0.00 0.00 2285.38 2131.89 4579.62 00:26:48.312 { 00:26:48.312 "results": [ 00:26:48.312 { 00:26:48.312 "job": "nvme0n1", 00:26:48.312 "core_mask": "0x2", 00:26:48.312 "workload": "randread", 00:26:48.312 "status": "finished", 00:26:48.312 "queue_depth": 16, 00:26:48.312 "io_size": 131072, 00:26:48.312 "runtime": 2.002264, 00:26:48.312 "iops": 6992.084959825477, 00:26:48.312 "mibps": 874.0106199781846, 00:26:48.312 "io_failed": 0, 00:26:48.312 "io_timeout": 0, 00:26:48.312 "avg_latency_us": 2285.3800022948935, 00:26:48.312 "min_latency_us": 2131.8939759036143, 00:26:48.312 "max_latency_us": 4579.6240963855425 00:26:48.312 } 00:26:48.312 ], 00:26:48.312 "core_count": 1 00:26:48.312 } 00:26:48.312 14:31:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:48.312 14:31:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:48.313 14:31:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:48.313 14:31:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:48.313 | select(.opcode=="crc32c") 00:26:48.313 | "\(.module_name) \(.executed)"' 00:26:48.313 14:31:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:48.313 14:31:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:48.313 14:31:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:48.313 14:31:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:48.313 14:31:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:48.313 14:31:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 86990 00:26:48.313 14:31:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 86990 ']' 00:26:48.313 14:31:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 86990 00:26:48.313 14:31:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:26:48.313 14:31:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:48.313 14:31:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86990 00:26:48.313 killing process with pid 86990 00:26:48.313 Received shutdown signal, test time was about 2.000000 seconds 00:26:48.313 00:26:48.313 Latency(us) 00:26:48.313 [2024-11-06T14:31:15.948Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:26:48.313 [2024-11-06T14:31:15.948Z] =================================================================================================================== 00:26:48.313 [2024-11-06T14:31:15.948Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:48.313 14:31:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:48.313 14:31:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:48.313 14:31:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86990' 00:26:48.313 14:31:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 86990 00:26:48.313 14:31:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 86990 00:26:49.690 14:31:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:49.690 14:31:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:49.690 14:31:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:49.690 14:31:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:49.690 14:31:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:49.690 14:31:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:49.690 14:31:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:49.690 14:31:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=87065 00:26:49.691 14:31:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:49.691 14:31:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 87065 /var/tmp/bperf.sock 00:26:49.691 14:31:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 87065 ']' 00:26:49.691 14:31:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:49.691 14:31:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:49.691 14:31:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:49.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:49.691 14:31:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:49.691 14:31:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:49.691 [2024-11-06 14:31:17.164445] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:26:49.691 [2024-11-06 14:31:17.164774] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87065 ] 00:26:49.949 [2024-11-06 14:31:17.347464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:49.949 [2024-11-06 14:31:17.497280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:50.549 14:31:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:50.549 14:31:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:26:50.549 14:31:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:50.549 14:31:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:50.549 14:31:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:51.117 [2024-11-06 14:31:18.442867] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:51.117 14:31:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:51.117 14:31:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:51.377 nvme0n1 00:26:51.377 14:31:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:51.377 14:31:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:51.377 Running I/O for 2 seconds... 
00:26:53.692 17654.00 IOPS, 68.96 MiB/s [2024-11-06T14:31:21.327Z] 17653.50 IOPS, 68.96 MiB/s 00:26:53.692 Latency(us) 00:26:53.692 [2024-11-06T14:31:21.327Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:53.692 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:53.692 nvme0n1 : 2.00 17677.13 69.05 0.00 0.00 7235.48 6685.20 14002.07 00:26:53.692 [2024-11-06T14:31:21.327Z] =================================================================================================================== 00:26:53.692 [2024-11-06T14:31:21.327Z] Total : 17677.13 69.05 0.00 0.00 7235.48 6685.20 14002.07 00:26:53.692 { 00:26:53.692 "results": [ 00:26:53.692 { 00:26:53.692 "job": "nvme0n1", 00:26:53.692 "core_mask": "0x2", 00:26:53.692 "workload": "randwrite", 00:26:53.692 "status": "finished", 00:26:53.692 "queue_depth": 128, 00:26:53.692 "io_size": 4096, 00:26:53.692 "runtime": 2.004568, 00:26:53.692 "iops": 17677.125445482518, 00:26:53.692 "mibps": 69.05127127141608, 00:26:53.692 "io_failed": 0, 00:26:53.692 "io_timeout": 0, 00:26:53.692 "avg_latency_us": 7235.484680304397, 00:26:53.692 "min_latency_us": 6685.198393574297, 00:26:53.692 "max_latency_us": 14002.06907630522 00:26:53.692 } 00:26:53.692 ], 00:26:53.692 "core_count": 1 00:26:53.692 } 00:26:53.692 14:31:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:53.692 14:31:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:53.692 14:31:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:53.692 14:31:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:53.692 | select(.opcode=="crc32c") 00:26:53.692 | "\(.module_name) \(.executed)"' 00:26:53.692 14:31:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:53.692 14:31:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:53.692 14:31:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:53.692 14:31:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:53.692 14:31:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:53.692 14:31:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 87065 00:26:53.692 14:31:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 87065 ']' 00:26:53.692 14:31:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 87065 00:26:53.692 14:31:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:26:53.692 14:31:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:53.692 14:31:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87065 00:26:53.692 14:31:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:53.692 14:31:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 
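Each perform_tests call emits both the human-readable latency table and the raw JSON results object seen above; the MiB/s column is simply IOPS times the I/O size (for this randwrite run, 17677.13 IOPS x 4096 B ≈ 69.05 MiB/s). When post-processing these logs it is often easier to read the JSON block than the table; a small hedged example, assuming the JSON object has been copied into a file named results.json (a hypothetical name, not produced by the test):

    jq -r '.results[] | "\(.job) \(.workload) qd=\(.queue_depth) \(.iops|floor) IOPS avg=\(.avg_latency_us) us"' results.json
    # -> nvme0n1 randwrite qd=128 17677 IOPS avg=7235.484680304397 us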
00:26:53.692 14:31:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87065' 00:26:53.692 killing process with pid 87065 00:26:53.692 Received shutdown signal, test time was about 2.000000 seconds 00:26:53.692 00:26:53.692 Latency(us) 00:26:53.692 [2024-11-06T14:31:21.327Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:53.692 [2024-11-06T14:31:21.327Z] =================================================================================================================== 00:26:53.692 [2024-11-06T14:31:21.327Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:53.692 14:31:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 87065 00:26:53.692 14:31:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 87065 00:26:55.071 14:31:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:26:55.071 14:31:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:55.071 14:31:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:55.071 14:31:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:55.071 14:31:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:55.071 14:31:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:55.071 14:31:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:55.071 14:31:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=87132 00:26:55.071 14:31:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:55.071 14:31:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 87132 /var/tmp/bperf.sock 00:26:55.071 14:31:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 87132 ']' 00:26:55.071 14:31:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:55.071 14:31:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:55.071 14:31:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:55.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:55.072 14:31:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:55.072 14:31:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:55.072 [2024-11-06 14:31:22.476549] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:26:55.072 [2024-11-06 14:31:22.476939] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6I/O size of 131072 is greater than zero copy threshold (65536). 00:26:55.072 Zero copy mechanism will not be used. 
00:26:55.072 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87132 ] 00:26:55.072 [2024-11-06 14:31:22.660767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:55.331 [2024-11-06 14:31:22.803934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:55.902 14:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:55.902 14:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:26:55.902 14:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:55.902 14:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:55.902 14:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:56.161 [2024-11-06 14:31:23.741049] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:56.420 14:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:56.420 14:31:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:56.679 nvme0n1 00:26:56.679 14:31:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:56.679 14:31:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:56.938 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:56.938 Zero copy mechanism will not be used. 00:26:56.938 Running I/O for 2 seconds... 
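This second clean run repeats the same pattern at a larger block size (randwrite, 128 KiB I/O, queue depth 16). A rough sketch of the sequence the harness drives here, using the binaries, bperf socket, and 10.0.0.3 listener that appear in this trace (the real script also waits for the socket before issuing RPCs):

# Start bdevperf idle (--wait-for-rpc), then configure and run it over RPC.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc &
rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
# Finish subsystem init, then attach the target with TCP data digest enabled (--ddgst).
rpc framework_start_init
rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Kick off the timed run; this produces the IOPS/latency JSON block that follows.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests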
00:26:58.814 6775.00 IOPS, 846.88 MiB/s [2024-11-06T14:31:26.449Z] 6801.00 IOPS, 850.12 MiB/s 00:26:58.814 Latency(us) 00:26:58.814 [2024-11-06T14:31:26.449Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:58.814 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:58.814 nvme0n1 : 2.00 6796.59 849.57 0.00 0.00 2349.71 1631.82 4474.35 00:26:58.814 [2024-11-06T14:31:26.449Z] =================================================================================================================== 00:26:58.814 [2024-11-06T14:31:26.449Z] Total : 6796.59 849.57 0.00 0.00 2349.71 1631.82 4474.35 00:26:58.814 { 00:26:58.814 "results": [ 00:26:58.814 { 00:26:58.814 "job": "nvme0n1", 00:26:58.814 "core_mask": "0x2", 00:26:58.814 "workload": "randwrite", 00:26:58.814 "status": "finished", 00:26:58.814 "queue_depth": 16, 00:26:58.814 "io_size": 131072, 00:26:58.814 "runtime": 2.003651, 00:26:58.814 "iops": 6796.592819807442, 00:26:58.814 "mibps": 849.5741024759302, 00:26:58.814 "io_failed": 0, 00:26:58.814 "io_timeout": 0, 00:26:58.814 "avg_latency_us": 2349.706808317128, 00:26:58.814 "min_latency_us": 1631.820080321285, 00:26:58.814 "max_latency_us": 4474.345381526105 00:26:58.814 } 00:26:58.814 ], 00:26:58.814 "core_count": 1 00:26:58.814 } 00:26:58.814 14:31:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:58.814 14:31:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:58.814 14:31:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:58.814 14:31:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:58.814 | select(.opcode=="crc32c") 00:26:58.814 | "\(.module_name) \(.executed)"' 00:26:58.814 14:31:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:59.073 14:31:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:59.073 14:31:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:59.073 14:31:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:59.073 14:31:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:59.073 14:31:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 87132 00:26:59.073 14:31:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 87132 ']' 00:26:59.073 14:31:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 87132 00:26:59.073 14:31:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:26:59.073 14:31:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:59.073 14:31:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87132 00:26:59.073 killing process with pid 87132 00:26:59.073 Received shutdown signal, test time was about 2.000000 seconds 00:26:59.073 00:26:59.073 Latency(us) 00:26:59.073 [2024-11-06T14:31:26.708Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:26:59.073 [2024-11-06T14:31:26.708Z] =================================================================================================================== 00:26:59.073 [2024-11-06T14:31:26.708Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:59.073 14:31:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:59.073 14:31:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:59.073 14:31:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87132' 00:26:59.073 14:31:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 87132 00:26:59.073 14:31:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 87132 00:27:00.453 14:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 86886 00:27:00.453 14:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 86886 ']' 00:27:00.453 14:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 86886 00:27:00.453 14:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:27:00.453 14:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:00.453 14:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86886 00:27:00.453 killing process with pid 86886 00:27:00.453 14:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:00.453 14:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:00.453 14:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86886' 00:27:00.453 14:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 86886 00:27:00.453 14:31:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 86886 00:27:01.832 00:27:01.832 real 0m24.197s 00:27:01.832 user 0m44.375s 00:27:01.832 sys 0m5.729s 00:27:01.832 14:31:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:01.832 ************************************ 00:27:01.832 END TEST nvmf_digest_clean 00:27:01.832 ************************************ 00:27:01.832 14:31:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:01.832 14:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:27:01.832 14:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:27:01.832 14:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:01.832 14:31:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:01.832 ************************************ 00:27:01.832 START TEST nvmf_digest_error 00:27:01.832 ************************************ 00:27:01.832 14:31:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1127 -- # run_digest_error 00:27:01.832 14:31:29 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:27:01.832 14:31:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:01.832 14:31:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:01.832 14:31:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:01.832 14:31:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=87239 00:27:01.832 14:31:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 87239 00:27:01.832 14:31:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:01.832 14:31:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 87239 ']' 00:27:01.832 14:31:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:01.832 14:31:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:01.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:01.832 14:31:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:01.832 14:31:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:01.832 14:31:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:01.832 [2024-11-06 14:31:29.295399] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:27:01.833 [2024-11-06 14:31:29.295524] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:02.092 [2024-11-06 14:31:29.482417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:02.092 [2024-11-06 14:31:29.620525] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:02.092 [2024-11-06 14:31:29.620770] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:02.092 [2024-11-06 14:31:29.620968] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:02.092 [2024-11-06 14:31:29.621038] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:02.092 [2024-11-06 14:31:29.621072] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
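At this point the digest-error variant of the test begins: the nvmf target is started with --wait-for-rpc so that crc32c can be rerouted to the error-injecting accel module before the framework initializes. A hedged sketch of that configuration, built only from the RPC calls that appear further down in this trace (rpc_cmd stands in for rpc.py against the target's default /var/tmp/spdk.sock inside the test netns):

# Route crc32c through the "error" accel module before framework init...
rpc_cmd accel_assign_opc -o crc32c -m error
# ...then, once the bperf controller is attached, start with injection disabled
# and flip it to corrupt every 256th crc32c operation for the error-path runs.
rpc_cmd accel_error_inject_error -o crc32c -t disable
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256

The repeated "data digest error on tqpair" messages that follow are the expected result of that corruption, with the affected reads completing as COMMAND TRANSIENT TRANSPORT ERROR.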
00:27:02.092 [2024-11-06 14:31:29.622301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:02.663 14:31:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:02.663 14:31:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:27:02.663 14:31:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:02.663 14:31:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:02.663 14:31:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:02.663 14:31:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:02.663 14:31:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:27:02.663 14:31:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.663 14:31:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:02.663 [2024-11-06 14:31:30.174425] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:27:02.663 14:31:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.663 14:31:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:27:02.663 14:31:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:27:02.663 14:31:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.663 14:31:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:02.922 [2024-11-06 14:31:30.427947] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:27:03.180 null0 00:27:03.180 [2024-11-06 14:31:30.575794] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:03.180 [2024-11-06 14:31:30.600022] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:03.180 14:31:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.180 14:31:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:27:03.180 14:31:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:03.180 14:31:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:03.180 14:31:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:03.180 14:31:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:03.180 14:31:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=87277 00:27:03.181 14:31:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:27:03.181 14:31:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 87277 /var/tmp/bperf.sock 00:27:03.181 14:31:30 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 87277 ']' 00:27:03.181 14:31:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:03.181 14:31:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:03.181 14:31:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:03.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:03.181 14:31:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:03.181 14:31:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:03.181 [2024-11-06 14:31:30.707803] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:27:03.181 [2024-11-06 14:31:30.707941] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87277 ] 00:27:03.440 [2024-11-06 14:31:30.890301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:03.440 [2024-11-06 14:31:31.036054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:03.700 [2024-11-06 14:31:31.276179] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:27:03.959 14:31:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:03.959 14:31:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:27:03.959 14:31:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:03.959 14:31:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:04.218 14:31:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:04.218 14:31:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.218 14:31:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:04.218 14:31:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.218 14:31:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:04.218 14:31:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:04.478 nvme0n1 00:27:04.478 14:31:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:04.478 14:31:31 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.478 14:31:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:04.478 14:31:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.478 14:31:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:04.478 14:31:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:04.478 Running I/O for 2 seconds... 00:27:04.738 [2024-11-06 14:31:32.128113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.738 [2024-11-06 14:31:32.128179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.738 [2024-11-06 14:31:32.128201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.738 [2024-11-06 14:31:32.143628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.738 [2024-11-06 14:31:32.143682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.738 [2024-11-06 14:31:32.143703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.738 [2024-11-06 14:31:32.159159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.738 [2024-11-06 14:31:32.159214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.738 [2024-11-06 14:31:32.159231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.738 [2024-11-06 14:31:32.174633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.738 [2024-11-06 14:31:32.174689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.739 [2024-11-06 14:31:32.174707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.739 [2024-11-06 14:31:32.190114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.739 [2024-11-06 14:31:32.190162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.739 [2024-11-06 14:31:32.190182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.739 [2024-11-06 14:31:32.205561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.739 [2024-11-06 14:31:32.205735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:11 nsid:1 lba:18444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.739 [2024-11-06 14:31:32.205761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.739 [2024-11-06 14:31:32.221159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.739 [2024-11-06 14:31:32.221213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.739 [2024-11-06 14:31:32.221229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.739 [2024-11-06 14:31:32.236564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.739 [2024-11-06 14:31:32.236740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.739 [2024-11-06 14:31:32.236761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.739 [2024-11-06 14:31:32.252139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.739 [2024-11-06 14:31:32.252187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:8928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.739 [2024-11-06 14:31:32.252207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.739 [2024-11-06 14:31:32.267590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.739 [2024-11-06 14:31:32.267758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:23961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.739 [2024-11-06 14:31:32.267784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.739 [2024-11-06 14:31:32.283187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.739 [2024-11-06 14:31:32.283238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:15479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.739 [2024-11-06 14:31:32.283253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.739 [2024-11-06 14:31:32.298606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.739 [2024-11-06 14:31:32.298776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.739 [2024-11-06 14:31:32.298797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.739 [2024-11-06 14:31:32.314195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.739 [2024-11-06 
14:31:32.314240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:7195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.739 [2024-11-06 14:31:32.314274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.739 [2024-11-06 14:31:32.329626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.739 [2024-11-06 14:31:32.329786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:21313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.739 [2024-11-06 14:31:32.329815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.739 [2024-11-06 14:31:32.345220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.739 [2024-11-06 14:31:32.345272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.739 [2024-11-06 14:31:32.345288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.739 [2024-11-06 14:31:32.360476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.739 [2024-11-06 14:31:32.360655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:6866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.739 [2024-11-06 14:31:32.360675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.999 [2024-11-06 14:31:32.375966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.999 [2024-11-06 14:31:32.376008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:18799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.999 [2024-11-06 14:31:32.376028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.999 [2024-11-06 14:31:32.391234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.999 [2024-11-06 14:31:32.391393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:24770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.999 [2024-11-06 14:31:32.391417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.999 [2024-11-06 14:31:32.406598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.999 [2024-11-06 14:31:32.406755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:25099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.999 [2024-11-06 14:31:32.406774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.999 [2024-11-06 14:31:32.422074] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.999 [2024-11-06 14:31:32.422122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:19721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.999 [2024-11-06 14:31:32.422138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.999 [2024-11-06 14:31:32.437450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.999 [2024-11-06 14:31:32.437604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:2923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.999 [2024-11-06 14:31:32.437631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.999 [2024-11-06 14:31:32.453022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.999 [2024-11-06 14:31:32.453064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:4614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.999 [2024-11-06 14:31:32.453083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.999 [2024-11-06 14:31:32.468455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.999 [2024-11-06 14:31:32.468619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:21819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.999 [2024-11-06 14:31:32.468639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.999 [2024-11-06 14:31:32.483984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:04.999 [2024-11-06 14:31:32.484043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:15594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.999 [2024-11-06 14:31:32.484058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.999 [2024-11-06 14:31:32.499395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.000 [2024-11-06 14:31:32.499554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:23375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.000 [2024-11-06 14:31:32.499579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.000 [2024-11-06 14:31:32.514715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.000 [2024-11-06 14:31:32.514875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:19753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.000 [2024-11-06 14:31:32.514903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.000 [2024-11-06 14:31:32.530076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.000 [2024-11-06 14:31:32.530243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:20629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.000 [2024-11-06 14:31:32.530263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.000 [2024-11-06 14:31:32.545473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.000 [2024-11-06 14:31:32.545644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:2794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.000 [2024-11-06 14:31:32.545664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.000 [2024-11-06 14:31:32.560879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.000 [2024-11-06 14:31:32.560923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:5385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.000 [2024-11-06 14:31:32.560941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.000 [2024-11-06 14:31:32.576236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.000 [2024-11-06 14:31:32.576397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:10359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.000 [2024-11-06 14:31:32.576422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.000 [2024-11-06 14:31:32.591747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.000 [2024-11-06 14:31:32.591913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.000 [2024-11-06 14:31:32.591942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.000 [2024-11-06 14:31:32.607196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.000 [2024-11-06 14:31:32.607358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.000 [2024-11-06 14:31:32.607378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.000 [2024-11-06 14:31:32.622604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.000 [2024-11-06 14:31:32.622756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:6457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.000 [2024-11-06 14:31:32.622781] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.260 [2024-11-06 14:31:32.637912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.260 [2024-11-06 14:31:32.637954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:19561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.260 [2024-11-06 14:31:32.637972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.260 [2024-11-06 14:31:32.653184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.260 [2024-11-06 14:31:32.653359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:16786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.260 [2024-11-06 14:31:32.653379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.260 [2024-11-06 14:31:32.668671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.260 [2024-11-06 14:31:32.668854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:21763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.260 [2024-11-06 14:31:32.668875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.260 [2024-11-06 14:31:32.684064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.260 [2024-11-06 14:31:32.684107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:25393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.260 [2024-11-06 14:31:32.684125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.260 [2024-11-06 14:31:32.699387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.260 [2024-11-06 14:31:32.699545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:8622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.260 [2024-11-06 14:31:32.699570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.260 [2024-11-06 14:31:32.714891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.260 [2024-11-06 14:31:32.714944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:15701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.260 [2024-11-06 14:31:32.714960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.260 [2024-11-06 14:31:32.730018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.260 [2024-11-06 14:31:32.730195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:5620 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.260 [2024-11-06 14:31:32.730214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.260 [2024-11-06 14:31:32.745451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.260 [2024-11-06 14:31:32.745604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:9964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.260 [2024-11-06 14:31:32.745630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.260 [2024-11-06 14:31:32.760976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.260 [2024-11-06 14:31:32.761022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.260 [2024-11-06 14:31:32.761041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.260 [2024-11-06 14:31:32.776377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.260 [2024-11-06 14:31:32.776541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.260 [2024-11-06 14:31:32.776561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.260 [2024-11-06 14:31:32.791900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.260 [2024-11-06 14:31:32.791950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:18097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.260 [2024-11-06 14:31:32.791965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.260 [2024-11-06 14:31:32.807268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.260 [2024-11-06 14:31:32.807425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:4895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.261 [2024-11-06 14:31:32.807450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.261 [2024-11-06 14:31:32.822680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.261 [2024-11-06 14:31:32.822828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:5066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.261 [2024-11-06 14:31:32.822869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.261 [2024-11-06 14:31:32.838054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.261 [2024-11-06 14:31:32.838102] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:12120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.261 [2024-11-06 14:31:32.838117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.261 [2024-11-06 14:31:32.853274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.261 [2024-11-06 14:31:32.853451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:3999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.261 [2024-11-06 14:31:32.853471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.261 [2024-11-06 14:31:32.868680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.261 [2024-11-06 14:31:32.868842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:6949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.261 [2024-11-06 14:31:32.868880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.261 [2024-11-06 14:31:32.884151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.261 [2024-11-06 14:31:32.884194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:5352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.261 [2024-11-06 14:31:32.884213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.521 [2024-11-06 14:31:32.899423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.521 [2024-11-06 14:31:32.899586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:24535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.521 [2024-11-06 14:31:32.899605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.521 [2024-11-06 14:31:32.914817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.521 [2024-11-06 14:31:32.914983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:8784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.521 [2024-11-06 14:31:32.915003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.521 [2024-11-06 14:31:32.930144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.521 [2024-11-06 14:31:32.930305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:18703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.521 [2024-11-06 14:31:32.930335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.521 [2024-11-06 14:31:32.945537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x61500002b280) 00:27:05.522 [2024-11-06 14:31:32.945700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.522 [2024-11-06 14:31:32.945725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.522 [2024-11-06 14:31:32.960967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.522 [2024-11-06 14:31:32.961014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:10660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.522 [2024-11-06 14:31:32.961045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.522 [2024-11-06 14:31:32.976347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.522 [2024-11-06 14:31:32.976511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:10120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.522 [2024-11-06 14:31:32.976531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.522 [2024-11-06 14:31:32.991850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.522 [2024-11-06 14:31:32.991893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.522 [2024-11-06 14:31:32.991911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.522 [2024-11-06 14:31:33.007109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.522 [2024-11-06 14:31:33.007264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:20574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.522 [2024-11-06 14:31:33.007289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.522 [2024-11-06 14:31:33.022472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.522 [2024-11-06 14:31:33.022657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:11030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.522 [2024-11-06 14:31:33.022677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.522 [2024-11-06 14:31:33.037866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.522 [2024-11-06 14:31:33.037915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:24892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.522 [2024-11-06 14:31:33.037946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.522 [2024-11-06 
14:31:33.053153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.522 [2024-11-06 14:31:33.053322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:10301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.522 [2024-11-06 14:31:33.053347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.522 [2024-11-06 14:31:33.068503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.522 [2024-11-06 14:31:33.068667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:7709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.522 [2024-11-06 14:31:33.068693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.522 [2024-11-06 14:31:33.083894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.522 [2024-11-06 14:31:33.083942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:14389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.522 [2024-11-06 14:31:33.083972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.522 16193.00 IOPS, 63.25 MiB/s [2024-11-06T14:31:33.157Z] [2024-11-06 14:31:33.107222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.522 [2024-11-06 14:31:33.107375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:24717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.522 [2024-11-06 14:31:33.107402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.522 [2024-11-06 14:31:33.122569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.522 [2024-11-06 14:31:33.122721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:22800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.522 [2024-11-06 14:31:33.122748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.522 [2024-11-06 14:31:33.137909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.522 [2024-11-06 14:31:33.137958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:21277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.522 [2024-11-06 14:31:33.137989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.522 [2024-11-06 14:31:33.153354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.522 [2024-11-06 14:31:33.153529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:10424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.522 [2024-11-06 14:31:33.153549] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.782 [2024-11-06 14:31:33.168770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.782 [2024-11-06 14:31:33.168945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:15312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.782 [2024-11-06 14:31:33.168971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.782 [2024-11-06 14:31:33.184298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.782 [2024-11-06 14:31:33.184447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:21221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.782 [2024-11-06 14:31:33.184473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.782 [2024-11-06 14:31:33.199759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.782 [2024-11-06 14:31:33.199934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:8587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.782 [2024-11-06 14:31:33.199954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.782 [2024-11-06 14:31:33.215191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.782 [2024-11-06 14:31:33.215351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:7688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.782 [2024-11-06 14:31:33.215371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.782 [2024-11-06 14:31:33.230623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.782 [2024-11-06 14:31:33.230772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:10376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.782 [2024-11-06 14:31:33.230797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.782 [2024-11-06 14:31:33.246159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.782 [2024-11-06 14:31:33.246203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.782 [2024-11-06 14:31:33.246239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.782 [2024-11-06 14:31:33.261541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.782 [2024-11-06 14:31:33.261722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:10690 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.782 [2024-11-06 14:31:33.261741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.782 [2024-11-06 14:31:33.277083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.782 [2024-11-06 14:31:33.277135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:18747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.782 [2024-11-06 14:31:33.277151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.782 [2024-11-06 14:31:33.292446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.782 [2024-11-06 14:31:33.292603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:10600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.782 [2024-11-06 14:31:33.292628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.782 [2024-11-06 14:31:33.307979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.782 [2024-11-06 14:31:33.308023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.782 [2024-11-06 14:31:33.308042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.782 [2024-11-06 14:31:33.323306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.782 [2024-11-06 14:31:33.323478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:12365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.782 [2024-11-06 14:31:33.323499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.782 [2024-11-06 14:31:33.338925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.783 [2024-11-06 14:31:33.338972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:23831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.783 [2024-11-06 14:31:33.338987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.783 [2024-11-06 14:31:33.354292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.783 [2024-11-06 14:31:33.354455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:18518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.783 [2024-11-06 14:31:33.354480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.783 [2024-11-06 14:31:33.369902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.783 [2024-11-06 14:31:33.369946] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:14556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.783 [2024-11-06 14:31:33.369966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.783 [2024-11-06 14:31:33.385301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.783 [2024-11-06 14:31:33.385467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:25577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.783 [2024-11-06 14:31:33.385488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.783 [2024-11-06 14:31:33.400868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:05.783 [2024-11-06 14:31:33.400917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:23 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.783 [2024-11-06 14:31:33.400933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.043 [2024-11-06 14:31:33.416266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:06.043 [2024-11-06 14:31:33.416437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:20579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.043 [2024-11-06 14:31:33.416463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.043 [2024-11-06 14:31:33.431822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:06.043 [2024-11-06 14:31:33.432017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:11925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.043 [2024-11-06 14:31:33.432047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.043 [2024-11-06 14:31:33.447354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:06.043 [2024-11-06 14:31:33.447513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:1920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.043 [2024-11-06 14:31:33.447533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.043 [2024-11-06 14:31:33.462883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:06.043 [2024-11-06 14:31:33.462934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.043 [2024-11-06 14:31:33.462950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.043 [2024-11-06 14:31:33.478313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x61500002b280) 00:27:06.043 [2024-11-06 14:31:33.478471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:11218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.043 [2024-11-06 14:31:33.478505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.043 [2024-11-06 14:31:33.493902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:06.043 [2024-11-06 14:31:33.493946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:1965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.043 [2024-11-06 14:31:33.493968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.043 [2024-11-06 14:31:33.509379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:06.043 [2024-11-06 14:31:33.509543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:22573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.043 [2024-11-06 14:31:33.509563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.043 [2024-11-06 14:31:33.525073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:06.043 [2024-11-06 14:31:33.525122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:1995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.043 [2024-11-06 14:31:33.525137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.043 [2024-11-06 14:31:33.540545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:06.043 [2024-11-06 14:31:33.540698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:14872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.043 [2024-11-06 14:31:33.540727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.043 [2024-11-06 14:31:33.556127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:06.043 [2024-11-06 14:31:33.556169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:16148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.043 [2024-11-06 14:31:33.556186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.043 [2024-11-06 14:31:33.571526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:06.043 [2024-11-06 14:31:33.571719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:21309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.043 [2024-11-06 14:31:33.571739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.043 [2024-11-06 
14:31:33.587128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:06.043 [2024-11-06 14:31:33.587180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:11742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.043 [2024-11-06 14:31:33.587196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.043 [2024-11-06 14:31:33.602510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:06.043 [2024-11-06 14:31:33.602672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:15833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.043 [2024-11-06 14:31:33.602691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.043 [2024-11-06 14:31:33.618025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:06.043 [2024-11-06 14:31:33.618068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.043 [2024-11-06 14:31:33.618083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.043 [2024-11-06 14:31:33.633346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:06.043 [2024-11-06 14:31:33.633517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:13355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.043 [2024-11-06 14:31:33.633536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.043 [2024-11-06 14:31:33.648791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:06.043 [2024-11-06 14:31:33.648951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:21970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.043 [2024-11-06 14:31:33.648971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.043 [2024-11-06 14:31:33.664229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:06.043 [2024-11-06 14:31:33.664391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.043 [2024-11-06 14:31:33.664410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.382 [2024-11-06 14:31:33.679635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:06.382 [2024-11-06 14:31:33.679804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:5629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.382 [2024-11-06 14:31:33.679823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.382 [2024-11-06 14:31:33.695027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:06.382 [2024-11-06 14:31:33.695071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.382 [2024-11-06 14:31:33.695086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.382 [2024-11-06 14:31:33.710304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:06.382 [2024-11-06 14:31:33.710471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.382 [2024-11-06 14:31:33.710491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.382 [2024-11-06 14:31:33.725674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:06.382 [2024-11-06 14:31:33.725850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:16455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.382 [2024-11-06 14:31:33.725872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.382 [2024-11-06 14:31:33.741227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:06.382 [2024-11-06 14:31:33.741270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:24271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.382 [2024-11-06 14:31:33.741284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.382 [2024-11-06 14:31:33.756510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:06.382 [2024-11-06 14:31:33.756681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.382 [2024-11-06 14:31:33.756700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.382 [2024-11-06 14:31:33.771866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:06.382 [2024-11-06 14:31:33.771909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:5801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.382 [2024-11-06 14:31:33.771924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.382 [2024-11-06 14:31:33.787099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:06.382 [2024-11-06 14:31:33.787254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:15916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.382 [2024-11-06 
14:31:33.787275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.382 [2024-11-06 14:31:33.802641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:06.382 [2024-11-06 14:31:33.802794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:24716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.382 [2024-11-06 14:31:33.802815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.382 [2024-11-06 14:31:33.818132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:06.382 [2024-11-06 14:31:33.818178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:4044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.382 [2024-11-06 14:31:33.818193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.382 [2024-11-06 14:31:33.833508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:06.382 [2024-11-06 14:31:33.833665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:23146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.382 [2024-11-06 14:31:33.833685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.382 [2024-11-06 14:31:33.849039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:06.382 [2024-11-06 14:31:33.849084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:23543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.382 [2024-11-06 14:31:33.849099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.382 [2024-11-06 14:31:33.864422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:06.382 [2024-11-06 14:31:33.864580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:8599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.382 [2024-11-06 14:31:33.864600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.382 [2024-11-06 14:31:33.879931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:06.382 [2024-11-06 14:31:33.879974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.382 [2024-11-06 14:31:33.879989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.382 [2024-11-06 14:31:33.895206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:06.382 [2024-11-06 14:31:33.895362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 
nsid:1 lba:13769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.382 [2024-11-06 14:31:33.895382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.382 [2024-11-06 14:31:33.910625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:06.382 [2024-11-06 14:31:33.910777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:4672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.382 [2024-11-06 14:31:33.910796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.382 [2024-11-06 14:31:33.926013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:06.382 [2024-11-06 14:31:33.926057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:23294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.382 [2024-11-06 14:31:33.926073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.382 [2024-11-06 14:31:33.941235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:06.382 [2024-11-06 14:31:33.941408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:15317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.382 [2024-11-06 14:31:33.941428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.382 [2024-11-06 14:31:33.956744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:06.382 [2024-11-06 14:31:33.956899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:8815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.382 [2024-11-06 14:31:33.956935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.382 [2024-11-06 14:31:33.972161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:06.382 [2024-11-06 14:31:33.972324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.382 [2024-11-06 14:31:33.972344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.382 [2024-11-06 14:31:33.987682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:06.382 [2024-11-06 14:31:33.987856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.382 [2024-11-06 14:31:33.987876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.642 [2024-11-06 14:31:34.003121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:06.642 [2024-11-06 
14:31:34.003165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.642 [2024-11-06 14:31:34.003180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.642 [2024-11-06 14:31:34.018370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:06.642 [2024-11-06 14:31:34.018532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.642 [2024-11-06 14:31:34.018552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.642 [2024-11-06 14:31:34.033709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:06.642 [2024-11-06 14:31:34.033883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.642 [2024-11-06 14:31:34.033902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.642 [2024-11-06 14:31:34.049115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:06.642 [2024-11-06 14:31:34.049277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.642 [2024-11-06 14:31:34.049298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.642 [2024-11-06 14:31:34.064559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:06.642 [2024-11-06 14:31:34.064724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.642 [2024-11-06 14:31:34.064744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.642 [2024-11-06 14:31:34.079997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:06.642 [2024-11-06 14:31:34.080040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.642 [2024-11-06 14:31:34.080054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.642 16319.50 IOPS, 63.75 MiB/s [2024-11-06T14:31:34.278Z] [2024-11-06 14:31:34.101912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:06.643 [2024-11-06 14:31:34.101954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.643 [2024-11-06 14:31:34.101969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.643 00:27:06.643 Latency(us) 00:27:06.643 
[2024-11-06T14:31:34.278Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:06.643 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:06.643 nvme0n1 : 2.01 16317.61 63.74 0.00 0.00 7839.50 7264.23 31162.50 00:27:06.643 [2024-11-06T14:31:34.278Z] =================================================================================================================== 00:27:06.643 [2024-11-06T14:31:34.278Z] Total : 16317.61 63.74 0.00 0.00 7839.50 7264.23 31162.50 00:27:06.643 { 00:27:06.643 "results": [ 00:27:06.643 { 00:27:06.643 "job": "nvme0n1", 00:27:06.643 "core_mask": "0x2", 00:27:06.643 "workload": "randread", 00:27:06.643 "status": "finished", 00:27:06.643 "queue_depth": 128, 00:27:06.643 "io_size": 4096, 00:27:06.643 "runtime": 2.008076, 00:27:06.643 "iops": 16317.6094928678, 00:27:06.643 "mibps": 63.74066208151484, 00:27:06.643 "io_failed": 0, 00:27:06.643 "io_timeout": 0, 00:27:06.643 "avg_latency_us": 7839.496270552347, 00:27:06.643 "min_latency_us": 7264.231325301204, 00:27:06.643 "max_latency_us": 31162.499598393573 00:27:06.643 } 00:27:06.643 ], 00:27:06.643 "core_count": 1 00:27:06.643 } 00:27:06.643 14:31:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:06.643 14:31:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:06.643 14:31:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:06.643 | .driver_specific 00:27:06.643 | .nvme_error 00:27:06.643 | .status_code 00:27:06.643 | .command_transient_transport_error' 00:27:06.643 14:31:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:06.902 14:31:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 128 > 0 )) 00:27:06.902 14:31:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 87277 00:27:06.902 14:31:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 87277 ']' 00:27:06.902 14:31:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 87277 00:27:06.902 14:31:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:27:06.902 14:31:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:06.902 14:31:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87277 00:27:06.902 14:31:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:27:06.902 killing process with pid 87277 00:27:06.902 Received shutdown signal, test time was about 2.000000 seconds 00:27:06.902 00:27:06.902 Latency(us) 00:27:06.902 [2024-11-06T14:31:34.537Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:06.902 [2024-11-06T14:31:34.537Z] =================================================================================================================== 00:27:06.902 [2024-11-06T14:31:34.537Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:06.902 14:31:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:27:06.902 14:31:34 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87277' 00:27:06.902 14:31:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 87277 00:27:06.902 14:31:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 87277 00:27:07.839 14:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:27:07.839 14:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:07.840 14:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:07.840 14:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:27:07.840 14:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:27:07.840 14:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:27:07.840 14:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=87338 00:27:07.840 14:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 87338 /var/tmp/bperf.sock 00:27:07.840 14:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 87338 ']' 00:27:07.840 14:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:07.840 14:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:07.840 14:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:07.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:07.840 14:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:07.840 14:31:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:08.099 [2024-11-06 14:31:35.561124] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:27:08.099 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:08.099 Zero copy mechanism will not be used. 
00:27:08.099 [2024-11-06 14:31:35.561464] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87338 ] 00:27:08.357 [2024-11-06 14:31:35.745606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:08.357 [2024-11-06 14:31:35.890818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:08.616 [2024-11-06 14:31:36.139748] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:27:08.875 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:08.875 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:27:08.875 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:08.875 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:09.134 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:09.134 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.134 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:09.134 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.134 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:09.134 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:09.393 nvme0n1 00:27:09.393 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:09.393 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.393 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:09.393 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.393 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:09.393 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:09.393 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:09.393 Zero copy mechanism will not be used. 00:27:09.393 Running I/O for 2 seconds... 
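[Note] The host/digest.sh trace above shows how this case is judged: bdevperf attaches with --ddgst while accel_error_inject_error corrupts the crc32c digest, and the result is decided by counting COMMAND TRANSIENT TRANSPORT ERROR completions from bdev_get_iostat. A minimal sketch of that counting step, assuming the RPC socket (/var/tmp/bperf.sock) and bdev name (nvme0n1) seen in the trace; not the verbatim helper, just the same calls arranged as one function:

    get_transient_errcount() {
        local bdev=$1
        # Ask the bdevperf instance (started with --nvme-error-stat) for per-bdev
        # NVMe error statistics over its RPC socket, then pull out the transient
        # transport error counter that the injected digest errors increment.
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0]
                   | .driver_specific
                   | .nvme_error
                   | .status_code
                   | .command_transient_transport_error'
    }

The test then asserts the count is non-zero, e.g. (( $(get_transient_errcount nvme0n1) > 0 )), which corresponds to the (( 128 > 0 )) check traced earlier for the previous run.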
00:27:09.393 [2024-11-06 14:31:37.000943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.393 [2024-11-06 14:31:37.001186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.393 [2024-11-06 14:31:37.001345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.393 [2024-11-06 14:31:37.006163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.393 [2024-11-06 14:31:37.006351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.393 [2024-11-06 14:31:37.006473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.393 [2024-11-06 14:31:37.011216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.393 [2024-11-06 14:31:37.011400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.394 [2024-11-06 14:31:37.011520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.394 [2024-11-06 14:31:37.016271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.394 [2024-11-06 14:31:37.016442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.394 [2024-11-06 14:31:37.016463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.394 [2024-11-06 14:31:37.021176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.394 [2024-11-06 14:31:37.021351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.394 [2024-11-06 14:31:37.021372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.394 [2024-11-06 14:31:37.026137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.394 [2024-11-06 14:31:37.026183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.394 [2024-11-06 14:31:37.026206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.655 [2024-11-06 14:31:37.030854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.655 [2024-11-06 14:31:37.030897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.655 [2024-11-06 14:31:37.030917] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.655 [2024-11-06 14:31:37.035588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.655 [2024-11-06 14:31:37.035644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.655 [2024-11-06 14:31:37.035661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.655 [2024-11-06 14:31:37.040333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.655 [2024-11-06 14:31:37.040499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.655 [2024-11-06 14:31:37.040520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.655 [2024-11-06 14:31:37.045215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.655 [2024-11-06 14:31:37.045262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.655 [2024-11-06 14:31:37.045283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.655 [2024-11-06 14:31:37.049921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.655 [2024-11-06 14:31:37.049959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.655 [2024-11-06 14:31:37.049979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.655 [2024-11-06 14:31:37.054645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.655 [2024-11-06 14:31:37.054697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.655 [2024-11-06 14:31:37.054714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.655 [2024-11-06 14:31:37.059419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.655 [2024-11-06 14:31:37.059584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.655 [2024-11-06 14:31:37.059611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.655 [2024-11-06 14:31:37.064291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.655 [2024-11-06 14:31:37.064338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:09.655 [2024-11-06 14:31:37.064363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.655 [2024-11-06 14:31:37.069100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.655 [2024-11-06 14:31:37.069156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.655 [2024-11-06 14:31:37.069173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.655 [2024-11-06 14:31:37.073955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.655 [2024-11-06 14:31:37.074003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.655 [2024-11-06 14:31:37.074020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.655 [2024-11-06 14:31:37.078757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.655 [2024-11-06 14:31:37.078805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.655 [2024-11-06 14:31:37.078825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.655 [2024-11-06 14:31:37.083548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.655 [2024-11-06 14:31:37.083594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.655 [2024-11-06 14:31:37.083614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.655 [2024-11-06 14:31:37.088287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.655 [2024-11-06 14:31:37.088343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.655 [2024-11-06 14:31:37.088360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.655 [2024-11-06 14:31:37.093005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.655 [2024-11-06 14:31:37.093174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.655 [2024-11-06 14:31:37.093195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.655 [2024-11-06 14:31:37.097904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.655 [2024-11-06 14:31:37.097945] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.655 [2024-11-06 14:31:37.097965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.655 [2024-11-06 14:31:37.102631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.655 [2024-11-06 14:31:37.102676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.655 [2024-11-06 14:31:37.102696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.655 [2024-11-06 14:31:37.107444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.655 [2024-11-06 14:31:37.107497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.655 [2024-11-06 14:31:37.107514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.655 [2024-11-06 14:31:37.112233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.655 [2024-11-06 14:31:37.112288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.655 [2024-11-06 14:31:37.112305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.655 [2024-11-06 14:31:37.117053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.655 [2024-11-06 14:31:37.117100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.655 [2024-11-06 14:31:37.117120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.655 [2024-11-06 14:31:37.121753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.655 [2024-11-06 14:31:37.121799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.655 [2024-11-06 14:31:37.121819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.655 [2024-11-06 14:31:37.126561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.655 [2024-11-06 14:31:37.126617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.655 [2024-11-06 14:31:37.126644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.655 [2024-11-06 14:31:37.131425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x61500002b280) 00:27:09.655 [2024-11-06 14:31:37.131619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.655 [2024-11-06 14:31:37.131641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.655 [2024-11-06 14:31:37.136346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.655 [2024-11-06 14:31:37.136394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.655 [2024-11-06 14:31:37.136414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.655 [2024-11-06 14:31:37.141117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.656 [2024-11-06 14:31:37.141166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.656 [2024-11-06 14:31:37.141187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.656 [2024-11-06 14:31:37.145936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.656 [2024-11-06 14:31:37.145989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.656 [2024-11-06 14:31:37.146020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.656 [2024-11-06 14:31:37.150779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.656 [2024-11-06 14:31:37.150833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.656 [2024-11-06 14:31:37.150861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.656 [2024-11-06 14:31:37.155560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.656 [2024-11-06 14:31:37.155608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.656 [2024-11-06 14:31:37.155628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.656 [2024-11-06 14:31:37.160385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.656 [2024-11-06 14:31:37.160432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.656 [2024-11-06 14:31:37.160453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.656 [2024-11-06 
14:31:37.165150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.656 [2024-11-06 14:31:37.165325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.656 [2024-11-06 14:31:37.165347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.656 [2024-11-06 14:31:37.170132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.656 [2024-11-06 14:31:37.170184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.656 [2024-11-06 14:31:37.170200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.656 [2024-11-06 14:31:37.174940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.656 [2024-11-06 14:31:37.174984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.656 [2024-11-06 14:31:37.175008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.656 [2024-11-06 14:31:37.179730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.656 [2024-11-06 14:31:37.179786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.656 [2024-11-06 14:31:37.179803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.656 [2024-11-06 14:31:37.184491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.656 [2024-11-06 14:31:37.184665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.656 [2024-11-06 14:31:37.184686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.656 [2024-11-06 14:31:37.189399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.656 [2024-11-06 14:31:37.189565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.656 [2024-11-06 14:31:37.189678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.656 [2024-11-06 14:31:37.194441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.656 [2024-11-06 14:31:37.194623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.656 [2024-11-06 14:31:37.194803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.656 [2024-11-06 14:31:37.199684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.656 [2024-11-06 14:31:37.199882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.656 [2024-11-06 14:31:37.200005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.656 [2024-11-06 14:31:37.204765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.656 [2024-11-06 14:31:37.204955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.656 [2024-11-06 14:31:37.205111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.656 [2024-11-06 14:31:37.209936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.656 [2024-11-06 14:31:37.210109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.656 [2024-11-06 14:31:37.210212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.656 [2024-11-06 14:31:37.215043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.656 [2024-11-06 14:31:37.215216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.656 [2024-11-06 14:31:37.215387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.656 [2024-11-06 14:31:37.220135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.656 [2024-11-06 14:31:37.220317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.656 [2024-11-06 14:31:37.220454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.656 [2024-11-06 14:31:37.225160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.656 [2024-11-06 14:31:37.225340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.656 [2024-11-06 14:31:37.225522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.656 [2024-11-06 14:31:37.230253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.656 [2024-11-06 14:31:37.230426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.656 [2024-11-06 
14:31:37.230546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.656 [2024-11-06 14:31:37.235242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.656 [2024-11-06 14:31:37.235415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.656 [2024-11-06 14:31:37.235560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.656 [2024-11-06 14:31:37.240290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.656 [2024-11-06 14:31:37.240474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.656 [2024-11-06 14:31:37.240624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.656 [2024-11-06 14:31:37.245360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.656 [2024-11-06 14:31:37.245546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.656 [2024-11-06 14:31:37.245724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.656 [2024-11-06 14:31:37.250443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.656 [2024-11-06 14:31:37.250623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.656 [2024-11-06 14:31:37.250734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.656 [2024-11-06 14:31:37.255450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.656 [2024-11-06 14:31:37.255624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.656 [2024-11-06 14:31:37.255764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.656 [2024-11-06 14:31:37.260501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.656 [2024-11-06 14:31:37.260684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.656 [2024-11-06 14:31:37.260788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.656 [2024-11-06 14:31:37.265453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.656 [2024-11-06 14:31:37.265634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.656 [2024-11-06 14:31:37.265658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.656 [2024-11-06 14:31:37.270302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.656 [2024-11-06 14:31:37.270348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.656 [2024-11-06 14:31:37.270370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.657 [2024-11-06 14:31:37.275002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.657 [2024-11-06 14:31:37.275046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.657 [2024-11-06 14:31:37.275066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.657 [2024-11-06 14:31:37.279661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.657 [2024-11-06 14:31:37.279713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.657 [2024-11-06 14:31:37.279730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.657 [2024-11-06 14:31:37.284357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.657 [2024-11-06 14:31:37.284409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.657 [2024-11-06 14:31:37.284441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.918 [2024-11-06 14:31:37.289100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.918 [2024-11-06 14:31:37.289146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.918 [2024-11-06 14:31:37.289169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.918 [2024-11-06 14:31:37.293779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.918 [2024-11-06 14:31:37.293825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.918 [2024-11-06 14:31:37.293861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.918 [2024-11-06 14:31:37.298424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.918 [2024-11-06 
14:31:37.298476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.918 [2024-11-06 14:31:37.298493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.918 [2024-11-06 14:31:37.303179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.918 [2024-11-06 14:31:37.303232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.918 [2024-11-06 14:31:37.303248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.918 [2024-11-06 14:31:37.307867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.918 [2024-11-06 14:31:37.307909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.918 [2024-11-06 14:31:37.307928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.918 [2024-11-06 14:31:37.312519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.918 [2024-11-06 14:31:37.312563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.918 [2024-11-06 14:31:37.312598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.918 [2024-11-06 14:31:37.317220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.918 [2024-11-06 14:31:37.317273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.918 [2024-11-06 14:31:37.317288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.918 [2024-11-06 14:31:37.321901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.918 [2024-11-06 14:31:37.321947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.918 [2024-11-06 14:31:37.321963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.918 [2024-11-06 14:31:37.326602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.918 [2024-11-06 14:31:37.326644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.918 [2024-11-06 14:31:37.326665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.918 [2024-11-06 14:31:37.331344] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.918 [2024-11-06 14:31:37.331392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.918 [2024-11-06 14:31:37.331417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.918 [2024-11-06 14:31:37.336023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.918 [2024-11-06 14:31:37.336073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.918 [2024-11-06 14:31:37.336104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.918 [2024-11-06 14:31:37.340750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.918 [2024-11-06 14:31:37.340794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.918 [2024-11-06 14:31:37.340813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.918 [2024-11-06 14:31:37.345442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.918 [2024-11-06 14:31:37.345488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.918 [2024-11-06 14:31:37.345507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.918 [2024-11-06 14:31:37.350112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.918 [2024-11-06 14:31:37.350160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.918 [2024-11-06 14:31:37.350192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.918 [2024-11-06 14:31:37.354793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.918 [2024-11-06 14:31:37.354885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.918 [2024-11-06 14:31:37.354902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.918 [2024-11-06 14:31:37.359605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.918 [2024-11-06 14:31:37.359653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.918 [2024-11-06 14:31:37.359673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.918 [2024-11-06 14:31:37.364337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.918 [2024-11-06 14:31:37.364383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.918 [2024-11-06 14:31:37.364418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.918 [2024-11-06 14:31:37.369076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.918 [2024-11-06 14:31:37.369128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.918 [2024-11-06 14:31:37.369144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.918 [2024-11-06 14:31:37.373787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.918 [2024-11-06 14:31:37.373856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.918 [2024-11-06 14:31:37.373874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.918 [2024-11-06 14:31:37.378527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.918 [2024-11-06 14:31:37.378572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.918 [2024-11-06 14:31:37.378591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.919 [2024-11-06 14:31:37.383215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.919 [2024-11-06 14:31:37.383261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.919 [2024-11-06 14:31:37.383283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.919 [2024-11-06 14:31:37.387899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.919 [2024-11-06 14:31:37.387948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.919 [2024-11-06 14:31:37.387979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.919 [2024-11-06 14:31:37.392576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.919 [2024-11-06 14:31:37.392629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.919 [2024-11-06 14:31:37.392645] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.919 [2024-11-06 14:31:37.397237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.919 [2024-11-06 14:31:37.397282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.919 [2024-11-06 14:31:37.397305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.919 [2024-11-06 14:31:37.402037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.919 [2024-11-06 14:31:37.402195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.919 [2024-11-06 14:31:37.402221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.919 [2024-11-06 14:31:37.406900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.919 [2024-11-06 14:31:37.406951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.919 [2024-11-06 14:31:37.406967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.919 [2024-11-06 14:31:37.411583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.919 [2024-11-06 14:31:37.411647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.919 [2024-11-06 14:31:37.411679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.919 [2024-11-06 14:31:37.416341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.919 [2024-11-06 14:31:37.416387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.919 [2024-11-06 14:31:37.416406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.919 [2024-11-06 14:31:37.421099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.919 [2024-11-06 14:31:37.421275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.919 [2024-11-06 14:31:37.421296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.919 [2024-11-06 14:31:37.425976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.919 [2024-11-06 14:31:37.426025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.919 [2024-11-06 14:31:37.426041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.919 [2024-11-06 14:31:37.430653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.919 [2024-11-06 14:31:37.430699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.919 [2024-11-06 14:31:37.430719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.919 [2024-11-06 14:31:37.435345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.919 [2024-11-06 14:31:37.435393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.919 [2024-11-06 14:31:37.435413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.919 [2024-11-06 14:31:37.440027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.919 [2024-11-06 14:31:37.440207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.919 [2024-11-06 14:31:37.440228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.919 [2024-11-06 14:31:37.444910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.919 [2024-11-06 14:31:37.444961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.919 [2024-11-06 14:31:37.444978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.919 [2024-11-06 14:31:37.449669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.919 [2024-11-06 14:31:37.449716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.919 [2024-11-06 14:31:37.449736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.919 [2024-11-06 14:31:37.454345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.919 [2024-11-06 14:31:37.454390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.919 [2024-11-06 14:31:37.454410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.919 [2024-11-06 14:31:37.459104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.919 [2024-11-06 
14:31:37.459275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.919 [2024-11-06 14:31:37.459296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.919 [2024-11-06 14:31:37.463937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.919 [2024-11-06 14:31:37.463991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.919 [2024-11-06 14:31:37.464007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.919 [2024-11-06 14:31:37.468611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.919 [2024-11-06 14:31:37.468657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.919 [2024-11-06 14:31:37.468692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.919 [2024-11-06 14:31:37.473419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.919 [2024-11-06 14:31:37.473465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.919 [2024-11-06 14:31:37.473484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.919 [2024-11-06 14:31:37.478090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.919 [2024-11-06 14:31:37.478256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.919 [2024-11-06 14:31:37.478276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.919 [2024-11-06 14:31:37.482910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.919 [2024-11-06 14:31:37.482960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.919 [2024-11-06 14:31:37.482977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.919 [2024-11-06 14:31:37.487598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.919 [2024-11-06 14:31:37.487645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.919 [2024-11-06 14:31:37.487665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.919 [2024-11-06 14:31:37.492267] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.919 [2024-11-06 14:31:37.492314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.919 [2024-11-06 14:31:37.492357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.919 [2024-11-06 14:31:37.497031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.919 [2024-11-06 14:31:37.497209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.920 [2024-11-06 14:31:37.497230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.920 [2024-11-06 14:31:37.501970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.920 [2024-11-06 14:31:37.502018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.920 [2024-11-06 14:31:37.502035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.920 [2024-11-06 14:31:37.506685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.920 [2024-11-06 14:31:37.506731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.920 [2024-11-06 14:31:37.506754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.920 [2024-11-06 14:31:37.511391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.920 [2024-11-06 14:31:37.511444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.920 [2024-11-06 14:31:37.511461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.920 [2024-11-06 14:31:37.516124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.920 [2024-11-06 14:31:37.516177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.920 [2024-11-06 14:31:37.516193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.920 [2024-11-06 14:31:37.520830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.920 [2024-11-06 14:31:37.520902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.920 [2024-11-06 14:31:37.520925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.920 [2024-11-06 14:31:37.525525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.920 [2024-11-06 14:31:37.525570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.920 [2024-11-06 14:31:37.525605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.920 [2024-11-06 14:31:37.530248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.920 [2024-11-06 14:31:37.530302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.920 [2024-11-06 14:31:37.530318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.920 [2024-11-06 14:31:37.534932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.920 [2024-11-06 14:31:37.534981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.920 [2024-11-06 14:31:37.534998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.920 [2024-11-06 14:31:37.539608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.920 [2024-11-06 14:31:37.539665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.920 [2024-11-06 14:31:37.539685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.920 [2024-11-06 14:31:37.544304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.920 [2024-11-06 14:31:37.544350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.920 [2024-11-06 14:31:37.544369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.920 [2024-11-06 14:31:37.548949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:09.920 [2024-11-06 14:31:37.548998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.920 [2024-11-06 14:31:37.549029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.181 [2024-11-06 14:31:37.553653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.181 [2024-11-06 14:31:37.553707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.181 [2024-11-06 
14:31:37.553739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.181 [2024-11-06 14:31:37.558433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.181 [2024-11-06 14:31:37.558477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.181 [2024-11-06 14:31:37.558506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.181 [2024-11-06 14:31:37.563162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.181 [2024-11-06 14:31:37.563207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.181 [2024-11-06 14:31:37.563227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.181 [2024-11-06 14:31:37.567810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.181 [2024-11-06 14:31:37.567892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.181 [2024-11-06 14:31:37.567909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.181 [2024-11-06 14:31:37.572442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.181 [2024-11-06 14:31:37.572505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.181 [2024-11-06 14:31:37.572521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.181 [2024-11-06 14:31:37.577127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.181 [2024-11-06 14:31:37.577171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.181 [2024-11-06 14:31:37.577190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.181 [2024-11-06 14:31:37.581815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.181 [2024-11-06 14:31:37.581869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.181 [2024-11-06 14:31:37.581893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.181 [2024-11-06 14:31:37.586494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.181 [2024-11-06 14:31:37.586554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.181 [2024-11-06 14:31:37.586570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.181 [2024-11-06 14:31:37.591151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.181 [2024-11-06 14:31:37.591196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.181 [2024-11-06 14:31:37.591216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.181 [2024-11-06 14:31:37.595789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.181 [2024-11-06 14:31:37.595854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.181 [2024-11-06 14:31:37.595872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.181 [2024-11-06 14:31:37.600515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.181 [2024-11-06 14:31:37.600677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.181 [2024-11-06 14:31:37.600697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.181 [2024-11-06 14:31:37.605374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.181 [2024-11-06 14:31:37.605422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.181 [2024-11-06 14:31:37.605439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.181 [2024-11-06 14:31:37.610118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.181 [2024-11-06 14:31:37.610163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.181 [2024-11-06 14:31:37.610179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.181 [2024-11-06 14:31:37.614787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.181 [2024-11-06 14:31:37.614832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.181 [2024-11-06 14:31:37.614865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.181 [2024-11-06 14:31:37.619444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.181 
[2024-11-06 14:31:37.619606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.181 [2024-11-06 14:31:37.619627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.182 [2024-11-06 14:31:37.624256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.182 [2024-11-06 14:31:37.624299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.182 [2024-11-06 14:31:37.624315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.182 [2024-11-06 14:31:37.628982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.182 [2024-11-06 14:31:37.629026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.182 [2024-11-06 14:31:37.629042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.182 [2024-11-06 14:31:37.633698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.182 [2024-11-06 14:31:37.633745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.182 [2024-11-06 14:31:37.633760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.182 [2024-11-06 14:31:37.638323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.182 [2024-11-06 14:31:37.638481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.182 [2024-11-06 14:31:37.638508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.182 [2024-11-06 14:31:37.643138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.182 [2024-11-06 14:31:37.643185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.182 [2024-11-06 14:31:37.643201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.182 [2024-11-06 14:31:37.647793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.182 [2024-11-06 14:31:37.647855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.182 [2024-11-06 14:31:37.647888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.182 [2024-11-06 14:31:37.652475] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.182 [2024-11-06 14:31:37.652522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.182 [2024-11-06 14:31:37.652538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.182 [2024-11-06 14:31:37.657160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.182 [2024-11-06 14:31:37.657322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.182 [2024-11-06 14:31:37.657342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.182 [2024-11-06 14:31:37.662022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.182 [2024-11-06 14:31:37.662064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.182 [2024-11-06 14:31:37.662080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.182 [2024-11-06 14:31:37.666685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.182 [2024-11-06 14:31:37.666730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.182 [2024-11-06 14:31:37.666746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.182 [2024-11-06 14:31:37.671427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.182 [2024-11-06 14:31:37.671474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.182 [2024-11-06 14:31:37.671490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.182 [2024-11-06 14:31:37.676088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.182 [2024-11-06 14:31:37.676133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.182 [2024-11-06 14:31:37.676148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.182 [2024-11-06 14:31:37.680662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.182 [2024-11-06 14:31:37.680707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.182 [2024-11-06 14:31:37.680722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.182 [2024-11-06 14:31:37.685409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.182 [2024-11-06 14:31:37.685456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.182 [2024-11-06 14:31:37.685472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.182 [2024-11-06 14:31:37.690101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.182 [2024-11-06 14:31:37.690145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.182 [2024-11-06 14:31:37.690160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.182 [2024-11-06 14:31:37.694762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.182 [2024-11-06 14:31:37.694809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.182 [2024-11-06 14:31:37.694824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.182 [2024-11-06 14:31:37.699435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.182 [2024-11-06 14:31:37.699594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.182 [2024-11-06 14:31:37.699615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.182 [2024-11-06 14:31:37.704243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.182 [2024-11-06 14:31:37.704290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.182 [2024-11-06 14:31:37.704306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.182 [2024-11-06 14:31:37.708814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.182 [2024-11-06 14:31:37.708870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.182 [2024-11-06 14:31:37.708887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.182 [2024-11-06 14:31:37.713524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.182 [2024-11-06 14:31:37.713571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.182 [2024-11-06 14:31:37.713587] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.182 [2024-11-06 14:31:37.718233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.182 [2024-11-06 14:31:37.718392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.182 [2024-11-06 14:31:37.718413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.182 [2024-11-06 14:31:37.723084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.182 [2024-11-06 14:31:37.723132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.182 [2024-11-06 14:31:37.723148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.182 [2024-11-06 14:31:37.727781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.182 [2024-11-06 14:31:37.727827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.182 [2024-11-06 14:31:37.727860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.182 [2024-11-06 14:31:37.732494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.182 [2024-11-06 14:31:37.732540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.182 [2024-11-06 14:31:37.732556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.182 [2024-11-06 14:31:37.737185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.182 [2024-11-06 14:31:37.737347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.182 [2024-11-06 14:31:37.737367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.182 [2024-11-06 14:31:37.741993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.182 [2024-11-06 14:31:37.742035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.182 [2024-11-06 14:31:37.742051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.182 [2024-11-06 14:31:37.746646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.183 [2024-11-06 14:31:37.746692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.183 [2024-11-06 14:31:37.746708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.183 [2024-11-06 14:31:37.751310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.183 [2024-11-06 14:31:37.751356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.183 [2024-11-06 14:31:37.751372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.183 [2024-11-06 14:31:37.756017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.183 [2024-11-06 14:31:37.756177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.183 [2024-11-06 14:31:37.756211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.183 [2024-11-06 14:31:37.760802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.183 [2024-11-06 14:31:37.760864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.183 [2024-11-06 14:31:37.760881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.183 [2024-11-06 14:31:37.765524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.183 [2024-11-06 14:31:37.765570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.183 [2024-11-06 14:31:37.765586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.183 [2024-11-06 14:31:37.770275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.183 [2024-11-06 14:31:37.770319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.183 [2024-11-06 14:31:37.770335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.183 [2024-11-06 14:31:37.775004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.183 [2024-11-06 14:31:37.775161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.183 [2024-11-06 14:31:37.775182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.183 [2024-11-06 14:31:37.779797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.183 [2024-11-06 14:31:37.779858] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.183 [2024-11-06 14:31:37.779875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.183 [2024-11-06 14:31:37.784506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.183 [2024-11-06 14:31:37.784552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.183 [2024-11-06 14:31:37.784569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.183 [2024-11-06 14:31:37.789129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.183 [2024-11-06 14:31:37.789174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.183 [2024-11-06 14:31:37.789190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.183 [2024-11-06 14:31:37.793754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.183 [2024-11-06 14:31:37.793798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.183 [2024-11-06 14:31:37.793814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.183 [2024-11-06 14:31:37.798438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.183 [2024-11-06 14:31:37.798484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.183 [2024-11-06 14:31:37.798507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.183 [2024-11-06 14:31:37.803182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.183 [2024-11-06 14:31:37.803229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.183 [2024-11-06 14:31:37.803245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.183 [2024-11-06 14:31:37.807833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.183 [2024-11-06 14:31:37.807905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.183 [2024-11-06 14:31:37.807921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.183 [2024-11-06 14:31:37.812493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.183 [2024-11-06 14:31:37.812539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.183 [2024-11-06 14:31:37.812555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.445 [2024-11-06 14:31:37.817171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.445 [2024-11-06 14:31:37.817217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.445 [2024-11-06 14:31:37.817248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.445 [2024-11-06 14:31:37.821784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.445 [2024-11-06 14:31:37.821829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.445 [2024-11-06 14:31:37.821865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.445 [2024-11-06 14:31:37.826418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.445 [2024-11-06 14:31:37.826464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.445 [2024-11-06 14:31:37.826479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.445 [2024-11-06 14:31:37.831049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.445 [2024-11-06 14:31:37.831094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.445 [2024-11-06 14:31:37.831109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.445 [2024-11-06 14:31:37.835689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.445 [2024-11-06 14:31:37.835736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.445 [2024-11-06 14:31:37.835752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.445 [2024-11-06 14:31:37.840383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.445 [2024-11-06 14:31:37.840429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.445 [2024-11-06 14:31:37.840445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.445 
[2024-11-06 14:31:37.845064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.445 [2024-11-06 14:31:37.845109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.445 [2024-11-06 14:31:37.845125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.445 [2024-11-06 14:31:37.849701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.445 [2024-11-06 14:31:37.849745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.445 [2024-11-06 14:31:37.849761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.445 [2024-11-06 14:31:37.854353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.445 [2024-11-06 14:31:37.854397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.445 [2024-11-06 14:31:37.854413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.445 [2024-11-06 14:31:37.859008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.445 [2024-11-06 14:31:37.859053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.445 [2024-11-06 14:31:37.859068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.445 [2024-11-06 14:31:37.863730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.445 [2024-11-06 14:31:37.863777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.445 [2024-11-06 14:31:37.863793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.445 [2024-11-06 14:31:37.868421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.445 [2024-11-06 14:31:37.868467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.445 [2024-11-06 14:31:37.868483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.445 [2024-11-06 14:31:37.873087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.445 [2024-11-06 14:31:37.873134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.445 [2024-11-06 14:31:37.873150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.445 [2024-11-06 14:31:37.877723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.445 [2024-11-06 14:31:37.877767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.445 [2024-11-06 14:31:37.877783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.445 [2024-11-06 14:31:37.882388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.445 [2024-11-06 14:31:37.882550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.445 [2024-11-06 14:31:37.882571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.445 [2024-11-06 14:31:37.887176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.445 [2024-11-06 14:31:37.887224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.445 [2024-11-06 14:31:37.887240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.445 [2024-11-06 14:31:37.891802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.445 [2024-11-06 14:31:37.891862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.445 [2024-11-06 14:31:37.891879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.445 [2024-11-06 14:31:37.896546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.445 [2024-11-06 14:31:37.896594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.445 [2024-11-06 14:31:37.896610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.445 [2024-11-06 14:31:37.901225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.445 [2024-11-06 14:31:37.901391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.445 [2024-11-06 14:31:37.901411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.445 [2024-11-06 14:31:37.906072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.445 [2024-11-06 14:31:37.906116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.445 
[2024-11-06 14:31:37.906132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.445 [2024-11-06 14:31:37.910771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.445 [2024-11-06 14:31:37.910819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.445 [2024-11-06 14:31:37.910852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.445 [2024-11-06 14:31:37.915523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.445 [2024-11-06 14:31:37.915571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.445 [2024-11-06 14:31:37.915586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.445 [2024-11-06 14:31:37.920237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.445 [2024-11-06 14:31:37.920398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.445 [2024-11-06 14:31:37.920419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.445 [2024-11-06 14:31:37.925074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.445 [2024-11-06 14:31:37.925120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.445 [2024-11-06 14:31:37.925136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.445 [2024-11-06 14:31:37.929645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.445 [2024-11-06 14:31:37.929690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.445 [2024-11-06 14:31:37.929706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.445 [2024-11-06 14:31:37.934342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.445 [2024-11-06 14:31:37.934387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.446 [2024-11-06 14:31:37.934404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.446 [2024-11-06 14:31:37.939006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.446 [2024-11-06 14:31:37.939166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.446 [2024-11-06 14:31:37.939186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.446 [2024-11-06 14:31:37.943793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.446 [2024-11-06 14:31:37.943856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.446 [2024-11-06 14:31:37.943874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.446 [2024-11-06 14:31:37.948527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.446 [2024-11-06 14:31:37.948574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.446 [2024-11-06 14:31:37.948590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.446 [2024-11-06 14:31:37.953175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.446 [2024-11-06 14:31:37.953222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.446 [2024-11-06 14:31:37.953238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.446 [2024-11-06 14:31:37.957801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.446 [2024-11-06 14:31:37.957870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.446 [2024-11-06 14:31:37.957886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.446 [2024-11-06 14:31:37.962549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.446 [2024-11-06 14:31:37.962593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.446 [2024-11-06 14:31:37.962609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.446 [2024-11-06 14:31:37.967278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.446 [2024-11-06 14:31:37.967326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.446 [2024-11-06 14:31:37.967342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.446 [2024-11-06 14:31:37.972029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.446 
[2024-11-06 14:31:37.972075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.446 [2024-11-06 14:31:37.972091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.446 [2024-11-06 14:31:37.976781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.446 [2024-11-06 14:31:37.976829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.446 [2024-11-06 14:31:37.976863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.446 [2024-11-06 14:31:37.981511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.446 [2024-11-06 14:31:37.981557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.446 [2024-11-06 14:31:37.981573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.446 [2024-11-06 14:31:37.986245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.446 [2024-11-06 14:31:37.986288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.446 [2024-11-06 14:31:37.986303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.446 6448.00 IOPS, 806.00 MiB/s [2024-11-06T14:31:38.081Z] [2024-11-06 14:31:37.992370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.446 [2024-11-06 14:31:37.992523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.446 [2024-11-06 14:31:37.992544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.446 [2024-11-06 14:31:37.997123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.446 [2024-11-06 14:31:37.997169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.446 [2024-11-06 14:31:37.997185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.446 [2024-11-06 14:31:38.001795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.446 [2024-11-06 14:31:38.001855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.446 [2024-11-06 14:31:38.001872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.446 [2024-11-06 
14:31:38.006478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.446 [2024-11-06 14:31:38.006534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.446 [2024-11-06 14:31:38.006550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.446 [2024-11-06 14:31:38.011250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.446 [2024-11-06 14:31:38.011409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.446 [2024-11-06 14:31:38.011429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.446 [2024-11-06 14:31:38.016042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.446 [2024-11-06 14:31:38.016086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.446 [2024-11-06 14:31:38.016101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.446 [2024-11-06 14:31:38.020680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.446 [2024-11-06 14:31:38.020724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.446 [2024-11-06 14:31:38.020739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.446 [2024-11-06 14:31:38.025272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.446 [2024-11-06 14:31:38.025317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.446 [2024-11-06 14:31:38.025332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.446 [2024-11-06 14:31:38.029885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.446 [2024-11-06 14:31:38.029926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.446 [2024-11-06 14:31:38.029941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.446 [2024-11-06 14:31:38.034488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.446 [2024-11-06 14:31:38.034556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.446 [2024-11-06 14:31:38.034571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.446 [2024-11-06 14:31:38.039160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.446 [2024-11-06 14:31:38.039205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.446 [2024-11-06 14:31:38.039220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.446 [2024-11-06 14:31:38.043859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.447 [2024-11-06 14:31:38.043901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.447 [2024-11-06 14:31:38.043915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.447 [2024-11-06 14:31:38.048492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.447 [2024-11-06 14:31:38.048537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.447 [2024-11-06 14:31:38.048551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.447 [2024-11-06 14:31:38.053196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.447 [2024-11-06 14:31:38.053239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.447 [2024-11-06 14:31:38.053254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.447 [2024-11-06 14:31:38.057880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.447 [2024-11-06 14:31:38.058030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.447 [2024-11-06 14:31:38.058050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.447 [2024-11-06 14:31:38.062607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.447 [2024-11-06 14:31:38.062650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.447 [2024-11-06 14:31:38.062665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.447 [2024-11-06 14:31:38.067267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.447 [2024-11-06 14:31:38.067314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.447 [2024-11-06 
14:31:38.067329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.447 [2024-11-06 14:31:38.071903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.447 [2024-11-06 14:31:38.071946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.447 [2024-11-06 14:31:38.071961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.447 [2024-11-06 14:31:38.076446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.447 [2024-11-06 14:31:38.076492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.447 [2024-11-06 14:31:38.076506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.707 [2024-11-06 14:31:38.081102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.707 [2024-11-06 14:31:38.081146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.707 [2024-11-06 14:31:38.081162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.707 [2024-11-06 14:31:38.085785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.707 [2024-11-06 14:31:38.085828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.708 [2024-11-06 14:31:38.085862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.708 [2024-11-06 14:31:38.090422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.708 [2024-11-06 14:31:38.090466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.708 [2024-11-06 14:31:38.090481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.708 [2024-11-06 14:31:38.095101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.708 [2024-11-06 14:31:38.095145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.708 [2024-11-06 14:31:38.095160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.708 [2024-11-06 14:31:38.099758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.708 [2024-11-06 14:31:38.099803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.708 [2024-11-06 14:31:38.099817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.708 [2024-11-06 14:31:38.104434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.708 [2024-11-06 14:31:38.104479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.708 [2024-11-06 14:31:38.104494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.708 [2024-11-06 14:31:38.109061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.708 [2024-11-06 14:31:38.109105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.708 [2024-11-06 14:31:38.109120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.708 [2024-11-06 14:31:38.113653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.708 [2024-11-06 14:31:38.113696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.708 [2024-11-06 14:31:38.113712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.708 [2024-11-06 14:31:38.118354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.708 [2024-11-06 14:31:38.118526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.708 [2024-11-06 14:31:38.118545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.708 [2024-11-06 14:31:38.123165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.708 [2024-11-06 14:31:38.123211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.708 [2024-11-06 14:31:38.123227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.708 [2024-11-06 14:31:38.127754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.708 [2024-11-06 14:31:38.127799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.708 [2024-11-06 14:31:38.127813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.708 [2024-11-06 14:31:38.132426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.708 [2024-11-06 
14:31:38.132471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.708 [2024-11-06 14:31:38.132486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.708 [2024-11-06 14:31:38.137117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.708 [2024-11-06 14:31:38.137162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.708 [2024-11-06 14:31:38.137178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.708 [2024-11-06 14:31:38.141762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.708 [2024-11-06 14:31:38.141804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.708 [2024-11-06 14:31:38.141818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.708 [2024-11-06 14:31:38.146422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.708 [2024-11-06 14:31:38.146467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.708 [2024-11-06 14:31:38.146483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.708 [2024-11-06 14:31:38.151081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.708 [2024-11-06 14:31:38.151126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.708 [2024-11-06 14:31:38.151141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.708 [2024-11-06 14:31:38.155666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.708 [2024-11-06 14:31:38.155712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.708 [2024-11-06 14:31:38.155728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.708 [2024-11-06 14:31:38.160334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.708 [2024-11-06 14:31:38.160505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.708 [2024-11-06 14:31:38.160524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.708 [2024-11-06 14:31:38.165147] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.708 [2024-11-06 14:31:38.165192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.708 [2024-11-06 14:31:38.165215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.708 [2024-11-06 14:31:38.169849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.708 [2024-11-06 14:31:38.169889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.708 [2024-11-06 14:31:38.169904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.708 [2024-11-06 14:31:38.174440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.708 [2024-11-06 14:31:38.174484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.708 [2024-11-06 14:31:38.174508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.708 [2024-11-06 14:31:38.179099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.708 [2024-11-06 14:31:38.179258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.708 [2024-11-06 14:31:38.179279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.708 [2024-11-06 14:31:38.183868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.708 [2024-11-06 14:31:38.183910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.708 [2024-11-06 14:31:38.183925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.708 [2024-11-06 14:31:38.188502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.708 [2024-11-06 14:31:38.188550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.708 [2024-11-06 14:31:38.188567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.708 [2024-11-06 14:31:38.193119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.708 [2024-11-06 14:31:38.193164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.708 [2024-11-06 14:31:38.193179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.708 [2024-11-06 14:31:38.197889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.708 [2024-11-06 14:31:38.197931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.708 [2024-11-06 14:31:38.197946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.708 [2024-11-06 14:31:38.202601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.709 [2024-11-06 14:31:38.202644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.709 [2024-11-06 14:31:38.202659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.709 [2024-11-06 14:31:38.207251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.709 [2024-11-06 14:31:38.207294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.709 [2024-11-06 14:31:38.207309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.709 [2024-11-06 14:31:38.211982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.709 [2024-11-06 14:31:38.212024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.709 [2024-11-06 14:31:38.212039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.709 [2024-11-06 14:31:38.216718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.709 [2024-11-06 14:31:38.216763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.709 [2024-11-06 14:31:38.216778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.709 [2024-11-06 14:31:38.221406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.709 [2024-11-06 14:31:38.221564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.709 [2024-11-06 14:31:38.221583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.709 [2024-11-06 14:31:38.226158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.709 [2024-11-06 14:31:38.226202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.709 [2024-11-06 14:31:38.226217] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.709 [2024-11-06 14:31:38.230924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.709 [2024-11-06 14:31:38.230966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.709 [2024-11-06 14:31:38.230981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.709 [2024-11-06 14:31:38.235600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.709 [2024-11-06 14:31:38.235646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.709 [2024-11-06 14:31:38.235662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.709 [2024-11-06 14:31:38.240292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.709 [2024-11-06 14:31:38.240337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.709 [2024-11-06 14:31:38.240352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.709 [2024-11-06 14:31:38.245019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.709 [2024-11-06 14:31:38.245062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.709 [2024-11-06 14:31:38.245077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.709 [2024-11-06 14:31:38.249675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.709 [2024-11-06 14:31:38.249720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.709 [2024-11-06 14:31:38.249735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.709 [2024-11-06 14:31:38.254397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.709 [2024-11-06 14:31:38.254440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.709 [2024-11-06 14:31:38.254455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.709 [2024-11-06 14:31:38.259075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.709 [2024-11-06 14:31:38.259119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.709 [2024-11-06 14:31:38.259135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.709 [2024-11-06 14:31:38.263677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.709 [2024-11-06 14:31:38.263725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.709 [2024-11-06 14:31:38.263741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.709 [2024-11-06 14:31:38.268290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.709 [2024-11-06 14:31:38.268335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.709 [2024-11-06 14:31:38.268350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.709 [2024-11-06 14:31:38.272992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.709 [2024-11-06 14:31:38.273035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.709 [2024-11-06 14:31:38.273050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.709 [2024-11-06 14:31:38.277616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.709 [2024-11-06 14:31:38.277662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.709 [2024-11-06 14:31:38.277678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.709 [2024-11-06 14:31:38.282250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.709 [2024-11-06 14:31:38.282408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.709 [2024-11-06 14:31:38.282428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.709 [2024-11-06 14:31:38.286962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.709 [2024-11-06 14:31:38.287006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.709 [2024-11-06 14:31:38.287022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.709 [2024-11-06 14:31:38.291598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.709 [2024-11-06 
14:31:38.291645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.709 [2024-11-06 14:31:38.291661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.709 [2024-11-06 14:31:38.296275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.709 [2024-11-06 14:31:38.296320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.709 [2024-11-06 14:31:38.296335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.709 [2024-11-06 14:31:38.300975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.709 [2024-11-06 14:31:38.301131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.709 [2024-11-06 14:31:38.301152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.709 [2024-11-06 14:31:38.305766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.709 [2024-11-06 14:31:38.305810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.709 [2024-11-06 14:31:38.305826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.709 [2024-11-06 14:31:38.310482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.709 [2024-11-06 14:31:38.310535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.709 [2024-11-06 14:31:38.310552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.709 [2024-11-06 14:31:38.315140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.709 [2024-11-06 14:31:38.315186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.709 [2024-11-06 14:31:38.315202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.709 [2024-11-06 14:31:38.319769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.709 [2024-11-06 14:31:38.319816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.709 [2024-11-06 14:31:38.319832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.709 [2024-11-06 14:31:38.324518] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.709 [2024-11-06 14:31:38.324566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.710 [2024-11-06 14:31:38.324582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.710 [2024-11-06 14:31:38.329225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.710 [2024-11-06 14:31:38.329270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.710 [2024-11-06 14:31:38.329285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.710 [2024-11-06 14:31:38.333883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.710 [2024-11-06 14:31:38.333922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.710 [2024-11-06 14:31:38.333937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.710 [2024-11-06 14:31:38.338517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.710 [2024-11-06 14:31:38.338560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.710 [2024-11-06 14:31:38.338575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.970 [2024-11-06 14:31:38.343164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.970 [2024-11-06 14:31:38.343321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.970 [2024-11-06 14:31:38.343341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.970 [2024-11-06 14:31:38.347985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.970 [2024-11-06 14:31:38.348029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.970 [2024-11-06 14:31:38.348044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.970 [2024-11-06 14:31:38.352673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.970 [2024-11-06 14:31:38.352729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.970 [2024-11-06 14:31:38.352746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.970 [2024-11-06 14:31:38.357390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.970 [2024-11-06 14:31:38.357436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.970 [2024-11-06 14:31:38.357467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.970 [2024-11-06 14:31:38.362073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.970 [2024-11-06 14:31:38.362234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.970 [2024-11-06 14:31:38.362254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.970 [2024-11-06 14:31:38.366906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.970 [2024-11-06 14:31:38.366950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.970 [2024-11-06 14:31:38.366965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.970 [2024-11-06 14:31:38.371546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.970 [2024-11-06 14:31:38.371593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.970 [2024-11-06 14:31:38.371608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.970 [2024-11-06 14:31:38.376181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.970 [2024-11-06 14:31:38.376226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.970 [2024-11-06 14:31:38.376241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.970 [2024-11-06 14:31:38.380808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.970 [2024-11-06 14:31:38.380864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.970 [2024-11-06 14:31:38.380880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.970 [2024-11-06 14:31:38.385505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.970 [2024-11-06 14:31:38.385551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.970 [2024-11-06 14:31:38.385566] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.970 [2024-11-06 14:31:38.390200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.970 [2024-11-06 14:31:38.390243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.970 [2024-11-06 14:31:38.390259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.970 [2024-11-06 14:31:38.394907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.970 [2024-11-06 14:31:38.394948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.970 [2024-11-06 14:31:38.394963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.970 [2024-11-06 14:31:38.399504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.970 [2024-11-06 14:31:38.399549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.970 [2024-11-06 14:31:38.399564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.970 [2024-11-06 14:31:38.404163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.970 [2024-11-06 14:31:38.404319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.970 [2024-11-06 14:31:38.404338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.970 [2024-11-06 14:31:38.408922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.970 [2024-11-06 14:31:38.408964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.970 [2024-11-06 14:31:38.408979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.970 [2024-11-06 14:31:38.413615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.970 [2024-11-06 14:31:38.413660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.970 [2024-11-06 14:31:38.413676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.970 [2024-11-06 14:31:38.418305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.970 [2024-11-06 14:31:38.418348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.971 [2024-11-06 14:31:38.418363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.971 [2024-11-06 14:31:38.423094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.971 [2024-11-06 14:31:38.423251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.971 [2024-11-06 14:31:38.423271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.971 [2024-11-06 14:31:38.427944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.971 [2024-11-06 14:31:38.427986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.971 [2024-11-06 14:31:38.428002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.971 [2024-11-06 14:31:38.432587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.971 [2024-11-06 14:31:38.432632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.971 [2024-11-06 14:31:38.432647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.971 [2024-11-06 14:31:38.437222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.971 [2024-11-06 14:31:38.437267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.971 [2024-11-06 14:31:38.437282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.971 [2024-11-06 14:31:38.441908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.971 [2024-11-06 14:31:38.441948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.971 [2024-11-06 14:31:38.441963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.971 [2024-11-06 14:31:38.446495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.971 [2024-11-06 14:31:38.446563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.971 [2024-11-06 14:31:38.446577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.971 [2024-11-06 14:31:38.451198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.971 [2024-11-06 
14:31:38.451243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.971 [2024-11-06 14:31:38.451259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.971 [2024-11-06 14:31:38.455874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.971 [2024-11-06 14:31:38.455915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.971 [2024-11-06 14:31:38.455930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.971 [2024-11-06 14:31:38.460504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.971 [2024-11-06 14:31:38.460549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.971 [2024-11-06 14:31:38.460563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.971 [2024-11-06 14:31:38.465124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.971 [2024-11-06 14:31:38.465169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.971 [2024-11-06 14:31:38.465183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.971 [2024-11-06 14:31:38.469796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.971 [2024-11-06 14:31:38.469854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.971 [2024-11-06 14:31:38.469870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.971 [2024-11-06 14:31:38.474407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.971 [2024-11-06 14:31:38.474451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.971 [2024-11-06 14:31:38.474465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.971 [2024-11-06 14:31:38.479063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.971 [2024-11-06 14:31:38.479106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.971 [2024-11-06 14:31:38.479121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.971 [2024-11-06 14:31:38.483694] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.971 [2024-11-06 14:31:38.483737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.971 [2024-11-06 14:31:38.483752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.971 [2024-11-06 14:31:38.488329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.971 [2024-11-06 14:31:38.488374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.971 [2024-11-06 14:31:38.488389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.971 [2024-11-06 14:31:38.493020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.971 [2024-11-06 14:31:38.493062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.971 [2024-11-06 14:31:38.493076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.971 [2024-11-06 14:31:38.497696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.971 [2024-11-06 14:31:38.497742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.971 [2024-11-06 14:31:38.497758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.971 [2024-11-06 14:31:38.502318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.971 [2024-11-06 14:31:38.502360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.971 [2024-11-06 14:31:38.502375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.971 [2024-11-06 14:31:38.506958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.971 [2024-11-06 14:31:38.507117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.971 [2024-11-06 14:31:38.507136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.971 [2024-11-06 14:31:38.511628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.971 [2024-11-06 14:31:38.511674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.971 [2024-11-06 14:31:38.511690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.971 [2024-11-06 14:31:38.516254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.971 [2024-11-06 14:31:38.516298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.971 [2024-11-06 14:31:38.516313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.971 [2024-11-06 14:31:38.520901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.971 [2024-11-06 14:31:38.520941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.971 [2024-11-06 14:31:38.520956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.971 [2024-11-06 14:31:38.525535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.971 [2024-11-06 14:31:38.525578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.971 [2024-11-06 14:31:38.525594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.971 [2024-11-06 14:31:38.530175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.971 [2024-11-06 14:31:38.530218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.971 [2024-11-06 14:31:38.530234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.971 [2024-11-06 14:31:38.534925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.971 [2024-11-06 14:31:38.534967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.971 [2024-11-06 14:31:38.534983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.972 [2024-11-06 14:31:38.539473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.972 [2024-11-06 14:31:38.539519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.972 [2024-11-06 14:31:38.539534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.972 [2024-11-06 14:31:38.544132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.972 [2024-11-06 14:31:38.544176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.972 [2024-11-06 14:31:38.544191] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.972 [2024-11-06 14:31:38.548788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.972 [2024-11-06 14:31:38.548832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.972 [2024-11-06 14:31:38.548877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.972 [2024-11-06 14:31:38.553423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.972 [2024-11-06 14:31:38.553468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.972 [2024-11-06 14:31:38.553483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.972 [2024-11-06 14:31:38.558049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.972 [2024-11-06 14:31:38.558105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.972 [2024-11-06 14:31:38.558120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.972 [2024-11-06 14:31:38.562712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.972 [2024-11-06 14:31:38.562756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.972 [2024-11-06 14:31:38.562772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.972 [2024-11-06 14:31:38.567281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.972 [2024-11-06 14:31:38.567327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.972 [2024-11-06 14:31:38.567342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.972 [2024-11-06 14:31:38.571966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.972 [2024-11-06 14:31:38.572148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.972 [2024-11-06 14:31:38.572168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.972 [2024-11-06 14:31:38.576765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.972 [2024-11-06 14:31:38.576811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.972 [2024-11-06 14:31:38.576826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.972 [2024-11-06 14:31:38.581436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.972 [2024-11-06 14:31:38.581481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.972 [2024-11-06 14:31:38.581495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.972 [2024-11-06 14:31:38.586148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.972 [2024-11-06 14:31:38.586191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.972 [2024-11-06 14:31:38.586206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.972 [2024-11-06 14:31:38.590828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.972 [2024-11-06 14:31:38.590884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.972 [2024-11-06 14:31:38.590899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.972 [2024-11-06 14:31:38.595504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.972 [2024-11-06 14:31:38.595661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.972 [2024-11-06 14:31:38.595680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.972 [2024-11-06 14:31:38.600277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:10.972 [2024-11-06 14:31:38.600322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.972 [2024-11-06 14:31:38.600337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.232 [2024-11-06 14:31:38.604942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.232 [2024-11-06 14:31:38.604983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.232 [2024-11-06 14:31:38.604999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.232 [2024-11-06 14:31:38.609583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.232 [2024-11-06 
14:31:38.609627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.232 [2024-11-06 14:31:38.609642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.232 [2024-11-06 14:31:38.614272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.232 [2024-11-06 14:31:38.614426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.232 [2024-11-06 14:31:38.614446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.232 [2024-11-06 14:31:38.619025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.232 [2024-11-06 14:31:38.619067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.232 [2024-11-06 14:31:38.619083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.232 [2024-11-06 14:31:38.623652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.232 [2024-11-06 14:31:38.623700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.232 [2024-11-06 14:31:38.623715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.232 [2024-11-06 14:31:38.628336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.232 [2024-11-06 14:31:38.628386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.232 [2024-11-06 14:31:38.628402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.232 [2024-11-06 14:31:38.633084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.232 [2024-11-06 14:31:38.633231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.232 [2024-11-06 14:31:38.633254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.232 [2024-11-06 14:31:38.637925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.233 [2024-11-06 14:31:38.637967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.233 [2024-11-06 14:31:38.637983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.233 [2024-11-06 14:31:38.642628] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.233 [2024-11-06 14:31:38.642670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.233 [2024-11-06 14:31:38.642686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.233 [2024-11-06 14:31:38.647346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.233 [2024-11-06 14:31:38.647392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.233 [2024-11-06 14:31:38.647408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.233 [2024-11-06 14:31:38.652024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.233 [2024-11-06 14:31:38.652181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.233 [2024-11-06 14:31:38.652201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.233 [2024-11-06 14:31:38.656825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.233 [2024-11-06 14:31:38.656873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.233 [2024-11-06 14:31:38.656889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.233 [2024-11-06 14:31:38.661426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.233 [2024-11-06 14:31:38.661469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.233 [2024-11-06 14:31:38.661483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.233 [2024-11-06 14:31:38.666165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.233 [2024-11-06 14:31:38.666208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.233 [2024-11-06 14:31:38.666223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.233 [2024-11-06 14:31:38.670845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.233 [2024-11-06 14:31:38.670898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.233 [2024-11-06 14:31:38.670914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.233 [2024-11-06 14:31:38.675485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.233 [2024-11-06 14:31:38.675647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.233 [2024-11-06 14:31:38.675666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.233 [2024-11-06 14:31:38.680313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.233 [2024-11-06 14:31:38.680358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.233 [2024-11-06 14:31:38.680374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.233 [2024-11-06 14:31:38.684983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.233 [2024-11-06 14:31:38.685025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.233 [2024-11-06 14:31:38.685040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.233 [2024-11-06 14:31:38.689578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.233 [2024-11-06 14:31:38.689622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.233 [2024-11-06 14:31:38.689637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.233 [2024-11-06 14:31:38.694250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.233 [2024-11-06 14:31:38.694294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.233 [2024-11-06 14:31:38.694308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.233 [2024-11-06 14:31:38.698951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.233 [2024-11-06 14:31:38.698993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.233 [2024-11-06 14:31:38.699008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.233 [2024-11-06 14:31:38.703548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.233 [2024-11-06 14:31:38.703592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.233 [2024-11-06 14:31:38.703607] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.233 [2024-11-06 14:31:38.708181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.233 [2024-11-06 14:31:38.708226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.233 [2024-11-06 14:31:38.708242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.233 [2024-11-06 14:31:38.712800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.233 [2024-11-06 14:31:38.712858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.233 [2024-11-06 14:31:38.712873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.233 [2024-11-06 14:31:38.717405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.233 [2024-11-06 14:31:38.717578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.233 [2024-11-06 14:31:38.717598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.233 [2024-11-06 14:31:38.722164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.233 [2024-11-06 14:31:38.722207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.233 [2024-11-06 14:31:38.722223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.233 [2024-11-06 14:31:38.726825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.233 [2024-11-06 14:31:38.726878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.233 [2024-11-06 14:31:38.726894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.233 [2024-11-06 14:31:38.731435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.233 [2024-11-06 14:31:38.731481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.233 [2024-11-06 14:31:38.731496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.233 [2024-11-06 14:31:38.736144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.233 [2024-11-06 14:31:38.736297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.233 [2024-11-06 14:31:38.736316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.233 [2024-11-06 14:31:38.740904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.233 [2024-11-06 14:31:38.740948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.233 [2024-11-06 14:31:38.740964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.233 [2024-11-06 14:31:38.745575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.233 [2024-11-06 14:31:38.745620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.233 [2024-11-06 14:31:38.745636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.233 [2024-11-06 14:31:38.750193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.233 [2024-11-06 14:31:38.750236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.233 [2024-11-06 14:31:38.750251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.233 [2024-11-06 14:31:38.754821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.233 [2024-11-06 14:31:38.754884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.233 [2024-11-06 14:31:38.754900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.234 [2024-11-06 14:31:38.759469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.234 [2024-11-06 14:31:38.759625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.234 [2024-11-06 14:31:38.759645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.234 [2024-11-06 14:31:38.764228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.234 [2024-11-06 14:31:38.764273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.234 [2024-11-06 14:31:38.764288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.234 [2024-11-06 14:31:38.768818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.234 [2024-11-06 
14:31:38.768871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.234 [2024-11-06 14:31:38.768886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.234 [2024-11-06 14:31:38.773516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.234 [2024-11-06 14:31:38.773560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.234 [2024-11-06 14:31:38.773575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.234 [2024-11-06 14:31:38.778172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.234 [2024-11-06 14:31:38.778327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.234 [2024-11-06 14:31:38.778346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.234 [2024-11-06 14:31:38.782977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.234 [2024-11-06 14:31:38.783019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.234 [2024-11-06 14:31:38.783034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.234 [2024-11-06 14:31:38.787617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.234 [2024-11-06 14:31:38.787664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.234 [2024-11-06 14:31:38.787680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.234 [2024-11-06 14:31:38.792204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.234 [2024-11-06 14:31:38.792250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.234 [2024-11-06 14:31:38.792265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.234 [2024-11-06 14:31:38.796815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.234 [2024-11-06 14:31:38.796873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.234 [2024-11-06 14:31:38.796905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.234 [2024-11-06 14:31:38.801377] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.234 [2024-11-06 14:31:38.801422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.234 [2024-11-06 14:31:38.801436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.234 [2024-11-06 14:31:38.805996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.234 [2024-11-06 14:31:38.806035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.234 [2024-11-06 14:31:38.806050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.234 [2024-11-06 14:31:38.810626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.234 [2024-11-06 14:31:38.810667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.234 [2024-11-06 14:31:38.810682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.234 [2024-11-06 14:31:38.815268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.234 [2024-11-06 14:31:38.815312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.234 [2024-11-06 14:31:38.815327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.234 [2024-11-06 14:31:38.819819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.234 [2024-11-06 14:31:38.819873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.234 [2024-11-06 14:31:38.819888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.234 [2024-11-06 14:31:38.824424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.234 [2024-11-06 14:31:38.824468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.234 [2024-11-06 14:31:38.824483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.234 [2024-11-06 14:31:38.829090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.234 [2024-11-06 14:31:38.829134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.234 [2024-11-06 14:31:38.829148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.234 [2024-11-06 14:31:38.833743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.234 [2024-11-06 14:31:38.833785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.234 [2024-11-06 14:31:38.833800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.234 [2024-11-06 14:31:38.838392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.234 [2024-11-06 14:31:38.838437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.234 [2024-11-06 14:31:38.838452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.234 [2024-11-06 14:31:38.843106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.234 [2024-11-06 14:31:38.843150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.234 [2024-11-06 14:31:38.843165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.234 [2024-11-06 14:31:38.847729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.234 [2024-11-06 14:31:38.847773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.234 [2024-11-06 14:31:38.847788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.234 [2024-11-06 14:31:38.852310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.234 [2024-11-06 14:31:38.852355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.234 [2024-11-06 14:31:38.852370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.234 [2024-11-06 14:31:38.857020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.235 [2024-11-06 14:31:38.857063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.235 [2024-11-06 14:31:38.857078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.235 [2024-11-06 14:31:38.861680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.235 [2024-11-06 14:31:38.861722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.235 [2024-11-06 14:31:38.861738] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.494 [2024-11-06 14:31:38.866279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.495 [2024-11-06 14:31:38.866436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.495 [2024-11-06 14:31:38.866455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.495 [2024-11-06 14:31:38.871029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.495 [2024-11-06 14:31:38.871072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.495 [2024-11-06 14:31:38.871087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.495 [2024-11-06 14:31:38.875601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.495 [2024-11-06 14:31:38.875646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.495 [2024-11-06 14:31:38.875661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.495 [2024-11-06 14:31:38.880206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.495 [2024-11-06 14:31:38.880250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.495 [2024-11-06 14:31:38.880265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.495 [2024-11-06 14:31:38.884804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.495 [2024-11-06 14:31:38.884861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.495 [2024-11-06 14:31:38.884892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.495 [2024-11-06 14:31:38.889394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.495 [2024-11-06 14:31:38.889439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.495 [2024-11-06 14:31:38.889453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.495 [2024-11-06 14:31:38.893961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.495 [2024-11-06 14:31:38.894000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.495 [2024-11-06 14:31:38.894015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.495 [2024-11-06 14:31:38.898614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.495 [2024-11-06 14:31:38.898655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.495 [2024-11-06 14:31:38.898670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.495 [2024-11-06 14:31:38.903247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.495 [2024-11-06 14:31:38.903292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.495 [2024-11-06 14:31:38.903307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.495 [2024-11-06 14:31:38.907879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.495 [2024-11-06 14:31:38.907921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.495 [2024-11-06 14:31:38.907936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.495 [2024-11-06 14:31:38.912474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.495 [2024-11-06 14:31:38.912517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.495 [2024-11-06 14:31:38.912532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.495 [2024-11-06 14:31:38.917058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.495 [2024-11-06 14:31:38.917100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.495 [2024-11-06 14:31:38.917115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.495 [2024-11-06 14:31:38.921677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.495 [2024-11-06 14:31:38.921719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.495 [2024-11-06 14:31:38.921734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.495 [2024-11-06 14:31:38.926286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.495 [2024-11-06 
14:31:38.926329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.495 [2024-11-06 14:31:38.926344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.495 [2024-11-06 14:31:38.930872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.495 [2024-11-06 14:31:38.930912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.495 [2024-11-06 14:31:38.930927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.495 [2024-11-06 14:31:38.935444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.495 [2024-11-06 14:31:38.935607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.495 [2024-11-06 14:31:38.935626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.495 [2024-11-06 14:31:38.940230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.495 [2024-11-06 14:31:38.940275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.495 [2024-11-06 14:31:38.940290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.495 [2024-11-06 14:31:38.944915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.495 [2024-11-06 14:31:38.944968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.495 [2024-11-06 14:31:38.944983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.495 [2024-11-06 14:31:38.949497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.495 [2024-11-06 14:31:38.949542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.495 [2024-11-06 14:31:38.949557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.495 [2024-11-06 14:31:38.954015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.495 [2024-11-06 14:31:38.954055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.495 [2024-11-06 14:31:38.954101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.495 [2024-11-06 14:31:38.958660] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.495 [2024-11-06 14:31:38.958703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.495 [2024-11-06 14:31:38.958718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.495 [2024-11-06 14:31:38.963259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.495 [2024-11-06 14:31:38.963303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.495 [2024-11-06 14:31:38.963318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.495 [2024-11-06 14:31:38.967904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.495 [2024-11-06 14:31:38.967946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.495 [2024-11-06 14:31:38.967961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.495 [2024-11-06 14:31:38.972539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.495 [2024-11-06 14:31:38.972583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.495 [2024-11-06 14:31:38.972598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.495 [2024-11-06 14:31:38.977166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.495 [2024-11-06 14:31:38.977211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.495 [2024-11-06 14:31:38.977226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.495 [2024-11-06 14:31:38.981797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.495 [2024-11-06 14:31:38.981855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.495 [2024-11-06 14:31:38.981871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.495 [2024-11-06 14:31:38.986380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:11.496 [2024-11-06 14:31:38.986426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.496 [2024-11-06 14:31:38.986441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.496 6541.00 IOPS, 817.62 MiB/s 00:27:11.496 Latency(us) 00:27:11.496 [2024-11-06T14:31:39.131Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:11.496 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:11.496 nvme0n1 : 2.00 6539.81 817.48 0.00 0.00 2443.77 2184.53 8685.49 00:27:11.496 [2024-11-06T14:31:39.131Z] =================================================================================================================== 00:27:11.496 [2024-11-06T14:31:39.131Z] Total : 6539.81 817.48 0.00 0.00 2443.77 2184.53 8685.49 00:27:11.496 { 00:27:11.496 "results": [ 00:27:11.496 { 00:27:11.496 "job": "nvme0n1", 00:27:11.496 "core_mask": "0x2", 00:27:11.496 "workload": "randread", 00:27:11.496 "status": "finished", 00:27:11.496 "queue_depth": 16, 00:27:11.496 "io_size": 131072, 00:27:11.496 "runtime": 2.002809, 00:27:11.496 "iops": 6539.8148300711655, 00:27:11.496 "mibps": 817.4768537588957, 00:27:11.496 "io_failed": 0, 00:27:11.496 "io_timeout": 0, 00:27:11.496 "avg_latency_us": 2443.774030432311, 00:27:11.496 "min_latency_us": 2184.5333333333333, 00:27:11.496 "max_latency_us": 8685.493975903615 00:27:11.496 } 00:27:11.496 ], 00:27:11.496 "core_count": 1 00:27:11.496 } 00:27:11.496 14:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:11.496 14:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:11.496 14:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:11.496 | .driver_specific 00:27:11.496 | .nvme_error 00:27:11.496 | .status_code 00:27:11.496 | .command_transient_transport_error' 00:27:11.496 14:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:11.755 14:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 422 > 0 )) 00:27:11.755 14:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 87338 00:27:11.755 14:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 87338 ']' 00:27:11.755 14:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 87338 00:27:11.755 14:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:27:11.755 14:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:11.755 14:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87338 00:27:11.755 killing process with pid 87338 00:27:11.755 Received shutdown signal, test time was about 2.000000 seconds 00:27:11.755 00:27:11.755 Latency(us) 00:27:11.755 [2024-11-06T14:31:39.390Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:11.755 [2024-11-06T14:31:39.390Z] =================================================================================================================== 00:27:11.755 [2024-11-06T14:31:39.390Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:11.755 14:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:27:11.755 14:31:39 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:27:11.755 14:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87338' 00:27:11.755 14:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 87338 00:27:11.755 14:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 87338 00:27:13.135 14:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:27:13.135 14:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:13.135 14:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:27:13.135 14:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:13.135 14:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:13.135 14:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:27:13.135 14:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=87405 00:27:13.135 14:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 87405 /var/tmp/bperf.sock 00:27:13.135 14:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 87405 ']' 00:27:13.135 14:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:13.135 14:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:13.135 14:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:13.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:13.135 14:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:13.135 14:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:13.135 [2024-11-06 14:31:40.596093] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
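The randread summary above is internally consistent: 6539.81 IOPS at this pass's 128 KiB I/O size is 6539.81 / 8 ≈ 817.48 MiB/s, matching the reported throughput. The pass/fail decision that follows it comes from the bdev iostat query visible in the host/digest.sh trace: bdev_nvme_set_options --nvme-error-stat keeps per-status-code NVMe error counters, bdev_get_iostat exposes them under driver_specific.nvme_error, and the jq filter pulls out the command_transient_transport_error count (422 here), which must be non-zero for the test to pass. A minimal sketch of that check, reusing the rpc.py path and bperf socket shown in the log rather than quoting the script itself:

  # Sketch of the transient-error check traced above (host/digest.sh@18/@27/@28/@71).
  # The rpc.py path and socket are copied from this log; treat them as environment-specific.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bperf_sock=/var/tmp/bperf.sock

  # bdev_get_iostat reports the per-status-code counters enabled by --nvme-error-stat.
  errcount=$("$rpc" -s "$bperf_sock" bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0]
             | .driver_specific
             | .nvme_error
             | .status_code
             | .command_transient_transport_error')

  # The injected digest corruption must have produced at least one such error.
  (( errcount > 0 ))
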
00:27:13.135 [2024-11-06 14:31:40.596230] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87405 ] 00:27:13.394 [2024-11-06 14:31:40.779972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:13.394 [2024-11-06 14:31:40.920989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:13.653 [2024-11-06 14:31:41.162929] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:27:13.923 14:31:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:13.924 14:31:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:27:13.924 14:31:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:13.924 14:31:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:14.184 14:31:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:14.184 14:31:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.184 14:31:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:14.184 14:31:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.184 14:31:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:14.184 14:31:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:14.443 nvme0n1 00:27:14.443 14:31:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:14.443 14:31:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.443 14:31:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:14.443 14:31:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.443 14:31:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:14.443 14:31:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:14.443 Running I/O for 2 seconds... 
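From here the randwrite pass produces the same kind of output as the randread pass above: crc32c corruption injected through accel_error_inject_error makes a fraction of data digests fail verification, and each affected command completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which the later iostat check counts. The setup traced just above can be condensed as the sketch below (a sketch of the traced RPCs, not the digest.sh script itself; the calls without the bperf socket mirror rpc_cmd in the trace and are assumed to address the nvmf target application's default RPC socket):

  # Sketch of the RPC sequence traced above for the randwrite pass. The rpc.py path,
  # socket, addresses and flags are copied from the log; the plain (no -s) calls mirror
  # rpc_cmd and are assumed to reach the nvmf target app on its default RPC socket.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bperf_sock=/var/tmp/bperf.sock

  # bdevperf side: keep per-status-code NVMe error counters and retry indefinitely,
  # so digest failures are counted rather than failing the I/O job outright.
  "$rpc" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Make sure no corruption is active while the controller is attached with data
  # digest enabled (--ddgst) over TCP.
  "$rpc" accel_error_inject_error -o crc32c -t disable
  "$rpc" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Now inject periodic crc32c corruption (flags copied from the trace) and kick off
  # the preconfigured 2-second randwrite workload (4 KiB, queue depth 128).
  "$rpc" accel_error_inject_error -o crc32c -t corrupt -i 256
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$bperf_sock" perform_tests
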
00:27:14.443 [2024-11-06 14:31:42.042686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfef90 00:27:14.443 [2024-11-06 14:31:42.045049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.443 [2024-11-06 14:31:42.045110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.443 [2024-11-06 14:31:42.057354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58 00:27:14.443 [2024-11-06 14:31:42.060243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.443 [2024-11-06 14:31:42.060294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:14.443 [2024-11-06 14:31:42.072683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfe2e8 00:27:14.443 [2024-11-06 14:31:42.075164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.443 [2024-11-06 14:31:42.075211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:14.703 [2024-11-06 14:31:42.087437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfda78 00:27:14.703 [2024-11-06 14:31:42.089705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.703 [2024-11-06 14:31:42.089885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:14.703 [2024-11-06 14:31:42.102070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfd208 00:27:14.703 [2024-11-06 14:31:42.104314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.703 [2024-11-06 14:31:42.104365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:14.703 [2024-11-06 14:31:42.116517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfc998 00:27:14.703 [2024-11-06 14:31:42.118874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.703 [2024-11-06 14:31:42.118917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:14.703 [2024-11-06 14:31:42.131118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfc128 00:27:14.703 [2024-11-06 14:31:42.133443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.703 [2024-11-06 14:31:42.133591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:14.703 [2024-11-06 14:31:42.145702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfb8b8 00:27:14.703 [2024-11-06 14:31:42.147925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.703 [2024-11-06 14:31:42.148081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:14.704 [2024-11-06 14:31:42.160281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfb048 00:27:14.704 [2024-11-06 14:31:42.162448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:6784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.704 [2024-11-06 14:31:42.162488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:14.704 [2024-11-06 14:31:42.174766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfa7d8 00:27:14.704 [2024-11-06 14:31:42.176925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:5613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.704 [2024-11-06 14:31:42.176967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:14.704 [2024-11-06 14:31:42.189193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf9f68 00:27:14.704 [2024-11-06 14:31:42.191447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:4277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.704 [2024-11-06 14:31:42.191503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:14.704 [2024-11-06 14:31:42.203747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf96f8 00:27:14.704 [2024-11-06 14:31:42.205895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.704 [2024-11-06 14:31:42.206035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:14.704 [2024-11-06 14:31:42.218309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8e88 00:27:14.704 [2024-11-06 14:31:42.220425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:14923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.704 [2024-11-06 14:31:42.220467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:14.704 [2024-11-06 14:31:42.232732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8618 00:27:14.704 [2024-11-06 14:31:42.234823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.704 [2024-11-06 14:31:42.234882] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:14.704 [2024-11-06 14:31:42.247174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf7da8 00:27:14.704 [2024-11-06 14:31:42.249364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:10795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.704 [2024-11-06 14:31:42.249512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:14.704 [2024-11-06 14:31:42.261892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf7538 00:27:14.704 [2024-11-06 14:31:42.264057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:18929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.704 [2024-11-06 14:31:42.264200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:14.704 [2024-11-06 14:31:42.276534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf6cc8 00:27:14.704 [2024-11-06 14:31:42.278581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:18571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.704 [2024-11-06 14:31:42.278629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.704 [2024-11-06 14:31:42.290967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf6458 00:27:14.704 [2024-11-06 14:31:42.292989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.704 [2024-11-06 14:31:42.293031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:14.704 [2024-11-06 14:31:42.305389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf5be8 00:27:14.704 [2024-11-06 14:31:42.307510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:13930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.704 [2024-11-06 14:31:42.307552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:14.704 [2024-11-06 14:31:42.319941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf5378 00:27:14.704 [2024-11-06 14:31:42.322014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.704 [2024-11-06 14:31:42.322169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:14.704 [2024-11-06 14:31:42.334696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf4b08 00:27:14.704 [2024-11-06 14:31:42.336672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:18420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:14.704 [2024-11-06 14:31:42.336714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:14.976 [2024-11-06 14:31:42.349203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf4298 00:27:14.976 [2024-11-06 14:31:42.351158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:12921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.976 [2024-11-06 14:31:42.351200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:14.976 [2024-11-06 14:31:42.363624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf3a28 00:27:14.976 [2024-11-06 14:31:42.365540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:19965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.976 [2024-11-06 14:31:42.365590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:14.976 [2024-11-06 14:31:42.377975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf31b8 00:27:14.976 [2024-11-06 14:31:42.380011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:3728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.976 [2024-11-06 14:31:42.380155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:14.976 [2024-11-06 14:31:42.392705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf2948 00:27:14.977 [2024-11-06 14:31:42.394629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:19049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.977 [2024-11-06 14:31:42.394670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:14.977 [2024-11-06 14:31:42.407192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf20d8 00:27:14.977 [2024-11-06 14:31:42.409046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.977 [2024-11-06 14:31:42.409093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:14.977 [2024-11-06 14:31:42.421580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf1868 00:27:14.977 [2024-11-06 14:31:42.423452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.977 [2024-11-06 14:31:42.423495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:14.977 [2024-11-06 14:31:42.436031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf0ff8 00:27:14.977 [2024-11-06 14:31:42.437978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 
nsid:1 lba:21976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.977 [2024-11-06 14:31:42.438126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:14.977 [2024-11-06 14:31:42.450714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf0788 00:27:14.977 [2024-11-06 14:31:42.452545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.977 [2024-11-06 14:31:42.452595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:14.977 [2024-11-06 14:31:42.465173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beff18 00:27:14.977 [2024-11-06 14:31:42.466984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:20315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.977 [2024-11-06 14:31:42.467027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:14.977 [2024-11-06 14:31:42.479633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bef6a8 00:27:14.977 [2024-11-06 14:31:42.481429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:23103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.977 [2024-11-06 14:31:42.481470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:14.977 [2024-11-06 14:31:42.494048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beee38 00:27:14.977 [2024-11-06 14:31:42.495929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:19818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.977 [2024-11-06 14:31:42.496083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:14.977 [2024-11-06 14:31:42.508707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bee5c8 00:27:14.977 [2024-11-06 14:31:42.510478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:8182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.977 [2024-11-06 14:31:42.510528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.977 [2024-11-06 14:31:42.523188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bedd58 00:27:14.977 [2024-11-06 14:31:42.524926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:15418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.977 [2024-11-06 14:31:42.524967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:14.977 [2024-11-06 14:31:42.537553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bed4e8 00:27:14.977 [2024-11-06 14:31:42.539286] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:17141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.977 [2024-11-06 14:31:42.539452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:14.977 [2024-11-06 14:31:42.552150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016becc78 00:27:14.977 [2024-11-06 14:31:42.553961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:10525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.977 [2024-11-06 14:31:42.554103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:14.977 [2024-11-06 14:31:42.566854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:27:14.977 [2024-11-06 14:31:42.568542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.977 [2024-11-06 14:31:42.568585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:14.977 [2024-11-06 14:31:42.580927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:27:14.977 [2024-11-06 14:31:42.582589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:4201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.977 [2024-11-06 14:31:42.582640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:14.977 [2024-11-06 14:31:42.595037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beb328 00:27:14.977 [2024-11-06 14:31:42.596668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:5900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.977 [2024-11-06 14:31:42.596814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:15.250 [2024-11-06 14:31:42.609428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beaab8 00:27:15.250 [2024-11-06 14:31:42.611080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.250 [2024-11-06 14:31:42.611226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:15.250 [2024-11-06 14:31:42.623623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bea248 00:27:15.250 [2024-11-06 14:31:42.625235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.250 [2024-11-06 14:31:42.625285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:15.250 [2024-11-06 14:31:42.637952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be99d8 
00:27:15.250 [2024-11-06 14:31:42.639653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:24455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.250 [2024-11-06 14:31:42.639695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:15.250 [2024-11-06 14:31:42.652529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be9168 00:27:15.250 [2024-11-06 14:31:42.654124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.250 [2024-11-06 14:31:42.654163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:15.250 [2024-11-06 14:31:42.666805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be88f8 00:27:15.250 [2024-11-06 14:31:42.668354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:18099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.250 [2024-11-06 14:31:42.668403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:15.250 [2024-11-06 14:31:42.681256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be8088 00:27:15.250 [2024-11-06 14:31:42.682794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.250 [2024-11-06 14:31:42.682958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:15.250 [2024-11-06 14:31:42.695822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be7818 00:27:15.250 [2024-11-06 14:31:42.697373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:24547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.250 [2024-11-06 14:31:42.697416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:15.250 [2024-11-06 14:31:42.710280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be6fa8 00:27:15.250 [2024-11-06 14:31:42.711783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.250 [2024-11-06 14:31:42.711850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:15.250 [2024-11-06 14:31:42.724600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be6738 00:27:15.251 [2024-11-06 14:31:42.726093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.251 [2024-11-06 14:31:42.726238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:15.251 [2024-11-06 14:31:42.739193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x618000004480) with pdu=0x200016be5ec8 00:27:15.251 [2024-11-06 14:31:42.740768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:4234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.251 [2024-11-06 14:31:42.740801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.251 [2024-11-06 14:31:42.753757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be5658 00:27:15.251 [2024-11-06 14:31:42.755216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:6268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.251 [2024-11-06 14:31:42.755264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:15.251 [2024-11-06 14:31:42.768268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be4de8 00:27:15.251 [2024-11-06 14:31:42.769689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:21863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.251 [2024-11-06 14:31:42.769733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:15.251 [2024-11-06 14:31:42.782730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be4578 00:27:15.251 [2024-11-06 14:31:42.784264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:21274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.251 [2024-11-06 14:31:42.784409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:15.251 [2024-11-06 14:31:42.797373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be3d08 00:27:15.251 [2024-11-06 14:31:42.798766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.251 [2024-11-06 14:31:42.798817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:15.251 [2024-11-06 14:31:42.811593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be3498 00:27:15.251 [2024-11-06 14:31:42.813033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:25346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.251 [2024-11-06 14:31:42.813073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:15.251 [2024-11-06 14:31:42.825582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be2c28 00:27:15.251 [2024-11-06 14:31:42.826986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:20924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.251 [2024-11-06 14:31:42.827029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:15.251 [2024-11-06 
14:31:42.839873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be23b8 00:27:15.251 [2024-11-06 14:31:42.841209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.251 [2024-11-06 14:31:42.841361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:15.251 [2024-11-06 14:31:42.854213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be1b48 00:27:15.251 [2024-11-06 14:31:42.855544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.251 [2024-11-06 14:31:42.855585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:15.251 [2024-11-06 14:31:42.868616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be12d8 00:27:15.251 [2024-11-06 14:31:42.869933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:5589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.251 [2024-11-06 14:31:42.869973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:15.251 [2024-11-06 14:31:42.883063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be0a68 00:27:15.251 [2024-11-06 14:31:42.884337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:20668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.251 [2024-11-06 14:31:42.884389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:15.511 [2024-11-06 14:31:42.897807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:27:15.511 [2024-11-06 14:31:42.899342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:6603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.511 [2024-11-06 14:31:42.899388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:15.511 [2024-11-06 14:31:42.912995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bdf988 00:27:15.511 [2024-11-06 14:31:42.914440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.511 [2024-11-06 14:31:42.914483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:15.511 [2024-11-06 14:31:42.927744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bdf118 00:27:15.511 [2024-11-06 14:31:42.928992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.511 [2024-11-06 14:31:42.929041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:15.511 [2024-11-06 14:31:42.942240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bde8a8 00:27:15.511 [2024-11-06 14:31:42.943454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.511 [2024-11-06 14:31:42.943616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:15.511 [2024-11-06 14:31:42.956845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bde038 00:27:15.511 [2024-11-06 14:31:42.958043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.511 [2024-11-06 14:31:42.958186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:15.511 [2024-11-06 14:31:42.977394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bde038 00:27:15.511 [2024-11-06 14:31:42.979698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.511 [2024-11-06 14:31:42.979739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.511 [2024-11-06 14:31:42.991900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bde8a8 00:27:15.511 [2024-11-06 14:31:42.994169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.511 [2024-11-06 14:31:42.994216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:15.511 [2024-11-06 14:31:43.006318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bdf118 00:27:15.511 [2024-11-06 14:31:43.008583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.511 [2024-11-06 14:31:43.008735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:15.511 [2024-11-06 14:31:43.020928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bdf988 00:27:15.511 17333.00 IOPS, 67.71 MiB/s [2024-11-06T14:31:43.146Z] [2024-11-06 14:31:43.023184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.511 [2024-11-06 14:31:43.023219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:15.511 [2024-11-06 14:31:43.035409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:27:15.511 [2024-11-06 14:31:43.037762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:13164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.511 
[2024-11-06 14:31:43.037864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:15.511 [2024-11-06 14:31:43.050062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be0a68 00:27:15.511 [2024-11-06 14:31:43.052388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.511 [2024-11-06 14:31:43.052437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:15.511 [2024-11-06 14:31:43.064668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be12d8 00:27:15.511 [2024-11-06 14:31:43.066895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.511 [2024-11-06 14:31:43.066937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:15.511 [2024-11-06 14:31:43.079155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be1b48 00:27:15.511 [2024-11-06 14:31:43.081324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.511 [2024-11-06 14:31:43.081374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:15.511 [2024-11-06 14:31:43.093653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be23b8 00:27:15.511 [2024-11-06 14:31:43.095821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:23556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.511 [2024-11-06 14:31:43.095878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:15.511 [2024-11-06 14:31:43.108191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be2c28 00:27:15.511 [2024-11-06 14:31:43.110465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:12195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.511 [2024-11-06 14:31:43.110514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:15.511 [2024-11-06 14:31:43.122804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be3498 00:27:15.511 [2024-11-06 14:31:43.124943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:11437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.511 [2024-11-06 14:31:43.124994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:15.511 [2024-11-06 14:31:43.137331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be3d08 00:27:15.511 [2024-11-06 14:31:43.139439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 
lba:12031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.511 [2024-11-06 14:31:43.139486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:15.772 [2024-11-06 14:31:43.151861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be4578 00:27:15.772 [2024-11-06 14:31:43.153951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.772 [2024-11-06 14:31:43.154100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:15.772 [2024-11-06 14:31:43.166470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be4de8 00:27:15.772 [2024-11-06 14:31:43.168667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:7070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.772 [2024-11-06 14:31:43.168722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:15.772 [2024-11-06 14:31:43.181144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be5658 00:27:15.772 [2024-11-06 14:31:43.183323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:7445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.772 [2024-11-06 14:31:43.183371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:15.772 [2024-11-06 14:31:43.195731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be5ec8 00:27:15.772 [2024-11-06 14:31:43.197770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:14927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.772 [2024-11-06 14:31:43.197814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:15.772 [2024-11-06 14:31:43.210196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be6738 00:27:15.772 [2024-11-06 14:31:43.212212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.772 [2024-11-06 14:31:43.212371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:15.772 [2024-11-06 14:31:43.224782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be6fa8 00:27:15.772 [2024-11-06 14:31:43.226931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:20626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.772 [2024-11-06 14:31:43.227084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:15.772 [2024-11-06 14:31:43.239532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be7818 00:27:15.772 [2024-11-06 14:31:43.241536] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:14207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.772 [2024-11-06 14:31:43.241578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:15.772 [2024-11-06 14:31:43.254053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be8088 00:27:15.772 [2024-11-06 14:31:43.256019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:17275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.772 [2024-11-06 14:31:43.256066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:15.772 [2024-11-06 14:31:43.268489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be88f8 00:27:15.772 [2024-11-06 14:31:43.270422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.772 [2024-11-06 14:31:43.270467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:15.772 [2024-11-06 14:31:43.283023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be9168 00:27:15.772 [2024-11-06 14:31:43.285065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:1454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.772 [2024-11-06 14:31:43.285213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:15.772 [2024-11-06 14:31:43.297660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be99d8 00:27:15.772 [2024-11-06 14:31:43.299597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:23413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.772 [2024-11-06 14:31:43.299648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:15.772 [2024-11-06 14:31:43.312125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bea248 00:27:15.772 [2024-11-06 14:31:43.314023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:9288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.772 [2024-11-06 14:31:43.314069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:15.772 [2024-11-06 14:31:43.326533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beaab8 00:27:15.772 [2024-11-06 14:31:43.328408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.772 [2024-11-06 14:31:43.328448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:15.772 [2024-11-06 14:31:43.340942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beb328 
00:27:15.772 [2024-11-06 14:31:43.342916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.772 [2024-11-06 14:31:43.343074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:15.772 [2024-11-06 14:31:43.355548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:27:15.772 [2024-11-06 14:31:43.357426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.772 [2024-11-06 14:31:43.357473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:15.772 [2024-11-06 14:31:43.370012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:27:15.772 [2024-11-06 14:31:43.371854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:9777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.772 [2024-11-06 14:31:43.371892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:15.772 [2024-11-06 14:31:43.384445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016becc78 00:27:15.772 [2024-11-06 14:31:43.386251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:8708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.772 [2024-11-06 14:31:43.386299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:15.772 [2024-11-06 14:31:43.398882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bed4e8 00:27:15.772 [2024-11-06 14:31:43.400767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.772 [2024-11-06 14:31:43.400815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:16.033 [2024-11-06 14:31:43.413423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bedd58 00:27:16.033 [2024-11-06 14:31:43.415202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.033 [2024-11-06 14:31:43.415242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:16.033 [2024-11-06 14:31:43.427824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bee5c8 00:27:16.033 [2024-11-06 14:31:43.429576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.033 [2024-11-06 14:31:43.429626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:16.033 [2024-11-06 14:31:43.442249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000004480) with pdu=0x200016beee38 00:27:16.033 [2024-11-06 14:31:43.444102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.033 [2024-11-06 14:31:43.444258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:16.033 [2024-11-06 14:31:43.456907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bef6a8 00:27:16.033 [2024-11-06 14:31:43.458733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.033 [2024-11-06 14:31:43.458776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:16.033 [2024-11-06 14:31:43.471449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beff18 00:27:16.033 [2024-11-06 14:31:43.473148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.033 [2024-11-06 14:31:43.473197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:16.033 [2024-11-06 14:31:43.485863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf0788 00:27:16.033 [2024-11-06 14:31:43.487532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:10655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.033 [2024-11-06 14:31:43.487579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:16.033 [2024-11-06 14:31:43.500281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf0ff8 00:27:16.033 [2024-11-06 14:31:43.502067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:8863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.033 [2024-11-06 14:31:43.502213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:16.033 [2024-11-06 14:31:43.514953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf1868 00:27:16.033 [2024-11-06 14:31:43.516687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:7553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.033 [2024-11-06 14:31:43.516742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:16.033 [2024-11-06 14:31:43.529536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf20d8 00:27:16.033 [2024-11-06 14:31:43.531163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.033 [2024-11-06 14:31:43.531211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:16.033 [2024-11-06 14:31:43.543948] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf2948 00:27:16.033 [2024-11-06 14:31:43.545548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:10883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.033 [2024-11-06 14:31:43.545590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:16.033 [2024-11-06 14:31:43.558370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf31b8 00:27:16.033 [2024-11-06 14:31:43.560085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:23911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.034 [2024-11-06 14:31:43.560240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:16.034 [2024-11-06 14:31:43.573016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf3a28 00:27:16.034 [2024-11-06 14:31:43.574689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:20804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.034 [2024-11-06 14:31:43.574736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:16.034 [2024-11-06 14:31:43.587611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf4298 00:27:16.034 [2024-11-06 14:31:43.589201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:18353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.034 [2024-11-06 14:31:43.589242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:16.034 [2024-11-06 14:31:43.602054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf4b08 00:27:16.034 [2024-11-06 14:31:43.603590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.034 [2024-11-06 14:31:43.603640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:16.034 [2024-11-06 14:31:43.616478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf5378 00:27:16.034 [2024-11-06 14:31:43.617997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.034 [2024-11-06 14:31:43.618151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:16.034 [2024-11-06 14:31:43.631036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf5be8 00:27:16.034 [2024-11-06 14:31:43.632655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.034 [2024-11-06 14:31:43.632699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0026 
p:0 m:0 dnr:0 00:27:16.034 [2024-11-06 14:31:43.645598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf6458 00:27:16.034 [2024-11-06 14:31:43.647117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:9700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.034 [2024-11-06 14:31:43.647165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:16.034 [2024-11-06 14:31:43.660090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf6cc8 00:27:16.034 [2024-11-06 14:31:43.661543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:1467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.034 [2024-11-06 14:31:43.661591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:16.294 [2024-11-06 14:31:43.674483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf7538 00:27:16.294 [2024-11-06 14:31:43.675975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:4106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.294 [2024-11-06 14:31:43.676124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:16.294 [2024-11-06 14:31:43.689127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf7da8 00:27:16.294 [2024-11-06 14:31:43.690684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:25066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.294 [2024-11-06 14:31:43.690735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:16.294 [2024-11-06 14:31:43.703746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8618 00:27:16.294 [2024-11-06 14:31:43.705206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:25523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.294 [2024-11-06 14:31:43.705252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:16.294 [2024-11-06 14:31:43.718266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8e88 00:27:16.294 [2024-11-06 14:31:43.719666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:22414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.294 [2024-11-06 14:31:43.719709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:16.294 [2024-11-06 14:31:43.732760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf96f8 00:27:16.294 [2024-11-06 14:31:43.734270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:16668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.294 [2024-11-06 14:31:43.734429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:16.294 [2024-11-06 14:31:43.747312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf9f68 00:27:16.294 [2024-11-06 14:31:43.748669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:10620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.294 [2024-11-06 14:31:43.748718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:16.294 [2024-11-06 14:31:43.761677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfa7d8 00:27:16.294 [2024-11-06 14:31:43.763045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:20010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.294 [2024-11-06 14:31:43.763085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:16.294 [2024-11-06 14:31:43.776114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfb048 00:27:16.294 [2024-11-06 14:31:43.777553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:4097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.294 [2024-11-06 14:31:43.777601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:16.294 [2024-11-06 14:31:43.790756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfb8b8 00:27:16.294 [2024-11-06 14:31:43.792080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:17104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.294 [2024-11-06 14:31:43.792240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:16.294 [2024-11-06 14:31:43.805450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfc128 00:27:16.294 [2024-11-06 14:31:43.806753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.294 [2024-11-06 14:31:43.806797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:16.294 [2024-11-06 14:31:43.819965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfc998 00:27:16.294 [2024-11-06 14:31:43.821240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:9350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.294 [2024-11-06 14:31:43.821291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:16.294 [2024-11-06 14:31:43.834440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfd208 00:27:16.294 [2024-11-06 14:31:43.835846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.294 [2024-11-06 14:31:43.835894] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:16.294 [2024-11-06 14:31:43.849105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfda78 00:27:16.294 [2024-11-06 14:31:43.850482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:18614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.294 [2024-11-06 14:31:43.850531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:16.294 [2024-11-06 14:31:43.863689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfe2e8 00:27:16.294 [2024-11-06 14:31:43.864932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:14336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.294 [2024-11-06 14:31:43.865096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:16.294 [2024-11-06 14:31:43.878294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58 00:27:16.294 [2024-11-06 14:31:43.879496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.294 [2024-11-06 14:31:43.879540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:16.294 [2024-11-06 14:31:43.898730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfef90 00:27:16.294 [2024-11-06 14:31:43.901034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.294 [2024-11-06 14:31:43.901076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.294 [2024-11-06 14:31:43.913148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfeb58 00:27:16.294 [2024-11-06 14:31:43.915440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.294 [2024-11-06 14:31:43.915477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:16.554 [2024-11-06 14:31:43.927552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfe2e8 00:27:16.554 [2024-11-06 14:31:43.929811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.554 [2024-11-06 14:31:43.929865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:16.554 [2024-11-06 14:31:43.941965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfda78 00:27:16.554 [2024-11-06 14:31:43.944208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:19655 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:27:16.554 [2024-11-06 14:31:43.944244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:16.554 [2024-11-06 14:31:43.956354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfd208 00:27:16.554 [2024-11-06 14:31:43.958581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.554 [2024-11-06 14:31:43.958616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:16.554 [2024-11-06 14:31:43.970748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfc998 00:27:16.554 [2024-11-06 14:31:43.972958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:14685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.554 [2024-11-06 14:31:43.972995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:16.554 [2024-11-06 14:31:43.985173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfc128 00:27:16.554 [2024-11-06 14:31:43.987370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:4993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.554 [2024-11-06 14:31:43.987407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:16.554 [2024-11-06 14:31:43.999574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfb8b8 00:27:16.554 [2024-11-06 14:31:44.001751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:15891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.554 [2024-11-06 14:31:44.001787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:16.554 [2024-11-06 14:31:44.013864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfb048 00:27:16.554 [2024-11-06 14:31:44.016022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:17573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:16.554 [2024-11-06 14:31:44.016055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:16.554 17395.00 IOPS, 67.95 MiB/s 00:27:16.554 Latency(us) 00:27:16.554 [2024-11-06T14:31:44.189Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:16.554 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:16.554 nvme0n1 : 2.00 17417.97 68.04 0.00 0.00 7342.78 2539.85 28846.37 00:27:16.554 [2024-11-06T14:31:44.189Z] =================================================================================================================== 00:27:16.554 [2024-11-06T14:31:44.189Z] Total : 17417.97 68.04 0.00 0.00 7342.78 2539.85 28846.37 00:27:16.554 { 00:27:16.554 "results": [ 00:27:16.554 { 00:27:16.554 "job": "nvme0n1", 00:27:16.554 "core_mask": "0x2", 00:27:16.554 "workload": "randwrite", 
00:27:16.554 "status": "finished", 00:27:16.554 "queue_depth": 128, 00:27:16.554 "io_size": 4096, 00:27:16.554 "runtime": 2.004711, 00:27:16.554 "iops": 17417.971967031655, 00:27:16.554 "mibps": 68.0389529962174, 00:27:16.554 "io_failed": 0, 00:27:16.554 "io_timeout": 0, 00:27:16.554 "avg_latency_us": 7342.776737421075, 00:27:16.554 "min_latency_us": 2539.8489959839358, 00:27:16.554 "max_latency_us": 28846.367871485945 00:27:16.554 } 00:27:16.554 ], 00:27:16.554 "core_count": 1 00:27:16.554 } 00:27:16.554 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:16.554 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:16.554 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:16.554 | .driver_specific 00:27:16.554 | .nvme_error 00:27:16.554 | .status_code 00:27:16.554 | .command_transient_transport_error' 00:27:16.554 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:16.814 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 136 > 0 )) 00:27:16.814 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 87405 00:27:16.814 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 87405 ']' 00:27:16.814 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 87405 00:27:16.814 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:27:16.814 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:16.814 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87405 00:27:16.814 killing process with pid 87405 00:27:16.814 Received shutdown signal, test time was about 2.000000 seconds 00:27:16.814 00:27:16.814 Latency(us) 00:27:16.814 [2024-11-06T14:31:44.449Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:16.814 [2024-11-06T14:31:44.449Z] =================================================================================================================== 00:27:16.814 [2024-11-06T14:31:44.449Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:16.814 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:27:16.814 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:27:16.814 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87405' 00:27:16.814 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 87405 00:27:16.814 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 87405 00:27:18.195 14:31:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:27:18.195 14:31:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:18.195 14:31:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@56 -- # rw=randwrite 00:27:18.195 14:31:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:27:18.195 14:31:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:27:18.195 14:31:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=87473 00:27:18.195 14:31:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 87473 /var/tmp/bperf.sock 00:27:18.195 14:31:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:27:18.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:18.195 14:31:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 87473 ']' 00:27:18.195 14:31:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:18.195 14:31:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:18.195 14:31:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:18.195 14:31:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:18.195 14:31:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:18.195 [2024-11-06 14:31:45.520114] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:27:18.195 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:18.195 Zero copy mechanism will not be used. 
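The get_transient_errcount step traced a few lines above (host/digest.sh@27-28) reads the digest failures back out of the controller's error statistics: bdevperf keeps serving RPCs on /var/tmp/bperf.sock, so the count is a plain bdev_get_iostat query filtered with jq. A minimal standalone sketch of that readout, using the same socket path, bdev name, and jq filter that appear in the trace (the helper function here is a simplified stand-in for the test's bperf_rpc wrapper):

  # Simplified stand-in for bperf_rpc as traced above.
  bperf_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
  # Pull the transient-transport-error counter out of the bdev iostat JSON.
  errcount=$(bperf_rpc bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errcount > 0 ))   # the run above observed 136 such completions

The trace only checks that the count is greater than zero; the exact value depends on how many WRITEs completed while the digest corruption was active.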
00:27:18.195 [2024-11-06 14:31:45.520230] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87473 ] 00:27:18.195 [2024-11-06 14:31:45.703180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:18.454 [2024-11-06 14:31:45.844933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:18.454 [2024-11-06 14:31:46.075926] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:27:19.023 14:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:19.023 14:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:27:19.023 14:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:19.023 14:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:19.023 14:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:19.023 14:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.023 14:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:19.023 14:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.023 14:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:19.023 14:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:19.336 nvme0n1 00:27:19.336 14:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:19.336 14:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.336 14:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:19.336 14:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.336 14:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:19.336 14:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:19.336 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:19.336 Zero copy mechanism will not be used. 00:27:19.336 Running I/O for 2 seconds... 
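Before this second run starts generating I/O, the trace shows the usual digest-error setup: NVMe error statistics and unlimited bdev-layer retries are switched on, crc32c corruption is disabled while the controller is attached with data digest (--ddgst) enabled, and corruption is then re-armed so subsequent WRITEs fail their digest check and complete as transient transport errors. A sketch of that sequence using the same RPCs and flags that appear in the trace; the rpc_cmd stand-in is assumed to hit the nvmf target's default RPC socket, which is where the error injection lands, while bperf_rpc drives the bdevperf process on /var/tmp/bperf.sock as logged:

  # Simplified stand-ins for the two RPC wrappers seen in the trace.
  bperf_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
  rpc_cmd() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }   # assumed default target socket

  # Per-status NVMe error counters on, bdev-layer retries unlimited (-1), so digest
  # failures are counted instead of failing the workload.
  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Attach with data digest enabled while crc32c injection is disabled...
  rpc_cmd accel_error_inject_error -o crc32c -t disable
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # ...then corrupt crc32c results (the -i 32 argument is copied verbatim from the
  # trace) so data digests fail and show up as TRANSIENT TRANSPORT ERROR completions.
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
  # Run the timed workload; its digest errors are what the log below records.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests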
00:27:19.336 [2024-11-06 14:31:46.965053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.336 [2024-11-06 14:31:46.965602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.336 [2024-11-06 14:31:46.965651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.336 [2024-11-06 14:31:46.969830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.336 [2024-11-06 14:31:46.969945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.336 [2024-11-06 14:31:46.969986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.597 [2024-11-06 14:31:46.974605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.597 [2024-11-06 14:31:46.974685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.597 [2024-11-06 14:31:46.974715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.597 [2024-11-06 14:31:46.979481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.597 [2024-11-06 14:31:46.979565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.597 [2024-11-06 14:31:46.979594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.597 [2024-11-06 14:31:46.984390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.597 [2024-11-06 14:31:46.984474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.597 [2024-11-06 14:31:46.984514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.597 [2024-11-06 14:31:46.989283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.597 [2024-11-06 14:31:46.989400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.597 [2024-11-06 14:31:46.989440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.597 [2024-11-06 14:31:46.994172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.597 [2024-11-06 14:31:46.994340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.597 [2024-11-06 14:31:46.994368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.597 [2024-11-06 14:31:46.998980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.597 [2024-11-06 14:31:46.999166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.597 [2024-11-06 14:31:46.999193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.597 [2024-11-06 14:31:47.003276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.597 [2024-11-06 14:31:47.003668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.597 [2024-11-06 14:31:47.003711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.597 [2024-11-06 14:31:47.007899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.597 [2024-11-06 14:31:47.007981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.597 [2024-11-06 14:31:47.008016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.597 [2024-11-06 14:31:47.012670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.597 [2024-11-06 14:31:47.012760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.597 [2024-11-06 14:31:47.012788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.597 [2024-11-06 14:31:47.017453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.598 [2024-11-06 14:31:47.017527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.598 [2024-11-06 14:31:47.017555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.598 [2024-11-06 14:31:47.022302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.598 [2024-11-06 14:31:47.022400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.598 [2024-11-06 14:31:47.022438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.598 [2024-11-06 14:31:47.027110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.598 [2024-11-06 14:31:47.027181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.598 [2024-11-06 14:31:47.027218] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.598 [2024-11-06 14:31:47.031988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.598 [2024-11-06 14:31:47.032083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.598 [2024-11-06 14:31:47.032111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.598 [2024-11-06 14:31:47.036830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.598 [2024-11-06 14:31:47.036957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.598 [2024-11-06 14:31:47.036985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.598 [2024-11-06 14:31:47.041697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.598 [2024-11-06 14:31:47.041904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.598 [2024-11-06 14:31:47.041940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.598 [2024-11-06 14:31:47.046722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.598 [2024-11-06 14:31:47.046934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.598 [2024-11-06 14:31:47.046970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.598 [2024-11-06 14:31:47.051732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.598 [2024-11-06 14:31:47.051948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.598 [2024-11-06 14:31:47.051976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.598 [2024-11-06 14:31:47.056827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.598 [2024-11-06 14:31:47.057019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.598 [2024-11-06 14:31:47.057047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.598 [2024-11-06 14:31:47.061183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.598 [2024-11-06 14:31:47.061606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.598 [2024-11-06 14:31:47.061633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.598 [2024-11-06 14:31:47.065733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.598 [2024-11-06 14:31:47.065809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.598 [2024-11-06 14:31:47.065857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.598 [2024-11-06 14:31:47.070539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.598 [2024-11-06 14:31:47.070605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.598 [2024-11-06 14:31:47.070641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.598 [2024-11-06 14:31:47.075342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.598 [2024-11-06 14:31:47.075419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.598 [2024-11-06 14:31:47.075447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.598 [2024-11-06 14:31:47.080162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.598 [2024-11-06 14:31:47.080253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.598 [2024-11-06 14:31:47.080281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.598 [2024-11-06 14:31:47.084920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.598 [2024-11-06 14:31:47.085022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.598 [2024-11-06 14:31:47.085057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.598 [2024-11-06 14:31:47.089641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.598 [2024-11-06 14:31:47.089755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.598 [2024-11-06 14:31:47.089793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.598 [2024-11-06 14:31:47.094451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.598 [2024-11-06 14:31:47.094538] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.598 [2024-11-06 14:31:47.094565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.598 [2024-11-06 14:31:47.099237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.598 [2024-11-06 14:31:47.099314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.598 [2024-11-06 14:31:47.099342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.598 [2024-11-06 14:31:47.103391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.598 [2024-11-06 14:31:47.103921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.598 [2024-11-06 14:31:47.103961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.598 [2024-11-06 14:31:47.108155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.598 [2024-11-06 14:31:47.108261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.598 [2024-11-06 14:31:47.108295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.598 [2024-11-06 14:31:47.112944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.598 [2024-11-06 14:31:47.113029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.598 [2024-11-06 14:31:47.113057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.598 [2024-11-06 14:31:47.117742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.598 [2024-11-06 14:31:47.117826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.598 [2024-11-06 14:31:47.117866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.598 [2024-11-06 14:31:47.122586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.598 [2024-11-06 14:31:47.122684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.598 [2024-11-06 14:31:47.122720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.598 [2024-11-06 14:31:47.127450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200016bfef90 00:27:19.598 [2024-11-06 14:31:47.127530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.598 [2024-11-06 14:31:47.127565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.598 [2024-11-06 14:31:47.132294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.598 [2024-11-06 14:31:47.132504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.598 [2024-11-06 14:31:47.132531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.598 [2024-11-06 14:31:47.137215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.598 [2024-11-06 14:31:47.137359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.598 [2024-11-06 14:31:47.137388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.598 [2024-11-06 14:31:47.141502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.599 [2024-11-06 14:31:47.141943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.599 [2024-11-06 14:31:47.141971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.599 [2024-11-06 14:31:47.146166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.599 [2024-11-06 14:31:47.146231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.599 [2024-11-06 14:31:47.146267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.599 [2024-11-06 14:31:47.151026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.599 [2024-11-06 14:31:47.151092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.599 [2024-11-06 14:31:47.151128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.599 [2024-11-06 14:31:47.155849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.599 [2024-11-06 14:31:47.155924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.599 [2024-11-06 14:31:47.155953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.599 [2024-11-06 14:31:47.160800] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.599 [2024-11-06 14:31:47.160908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.599 [2024-11-06 14:31:47.160937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.599 [2024-11-06 14:31:47.165813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.599 [2024-11-06 14:31:47.165892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.599 [2024-11-06 14:31:47.165928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.599 [2024-11-06 14:31:47.170802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.599 [2024-11-06 14:31:47.170883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.599 [2024-11-06 14:31:47.170919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.599 [2024-11-06 14:31:47.175574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.599 [2024-11-06 14:31:47.175653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.599 [2024-11-06 14:31:47.175681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.599 [2024-11-06 14:31:47.180444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.599 [2024-11-06 14:31:47.180525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.599 [2024-11-06 14:31:47.180552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.599 [2024-11-06 14:31:47.185253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.599 [2024-11-06 14:31:47.185339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.599 [2024-11-06 14:31:47.185376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.599 [2024-11-06 14:31:47.190185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.599 [2024-11-06 14:31:47.190275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.599 [2024-11-06 14:31:47.190310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:27:19.599 [2024-11-06 14:31:47.194429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.599 [2024-11-06 14:31:47.194970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.599 [2024-11-06 14:31:47.195003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.599 [2024-11-06 14:31:47.199152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.599 [2024-11-06 14:31:47.199263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.599 [2024-11-06 14:31:47.199290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.599 [2024-11-06 14:31:47.204002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.599 [2024-11-06 14:31:47.204070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.599 [2024-11-06 14:31:47.204110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.599 [2024-11-06 14:31:47.208769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.599 [2024-11-06 14:31:47.208873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.599 [2024-11-06 14:31:47.208909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.599 [2024-11-06 14:31:47.213561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.599 [2024-11-06 14:31:47.213663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.599 [2024-11-06 14:31:47.213691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.599 [2024-11-06 14:31:47.218426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.599 [2024-11-06 14:31:47.218549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.599 [2024-11-06 14:31:47.218576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.599 [2024-11-06 14:31:47.223325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.599 [2024-11-06 14:31:47.223401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.599 [2024-11-06 14:31:47.223435] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.599 [2024-11-06 14:31:47.228272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.599 [2024-11-06 14:31:47.228451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.599 [2024-11-06 14:31:47.228488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.860 [2024-11-06 14:31:47.233272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.860 [2024-11-06 14:31:47.233353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.860 [2024-11-06 14:31:47.233379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.860 [2024-11-06 14:31:47.238187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.860 [2024-11-06 14:31:47.238351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.860 [2024-11-06 14:31:47.238379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.860 [2024-11-06 14:31:47.242965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.860 [2024-11-06 14:31:47.243123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.860 [2024-11-06 14:31:47.243150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.860 [2024-11-06 14:31:47.247166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.860 [2024-11-06 14:31:47.247575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.860 [2024-11-06 14:31:47.247616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.861 [2024-11-06 14:31:47.251711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.861 [2024-11-06 14:31:47.251781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.861 [2024-11-06 14:31:47.251817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.861 [2024-11-06 14:31:47.256461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.861 [2024-11-06 14:31:47.256549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.861 
[2024-11-06 14:31:47.256576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.861 [2024-11-06 14:31:47.261280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.861 [2024-11-06 14:31:47.261354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.861 [2024-11-06 14:31:47.261382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.861 [2024-11-06 14:31:47.266084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.861 [2024-11-06 14:31:47.266158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.861 [2024-11-06 14:31:47.266196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.861 [2024-11-06 14:31:47.270851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.861 [2024-11-06 14:31:47.270937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.861 [2024-11-06 14:31:47.270972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.861 [2024-11-06 14:31:47.275530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.861 [2024-11-06 14:31:47.275640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.861 [2024-11-06 14:31:47.275668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.861 [2024-11-06 14:31:47.280312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.861 [2024-11-06 14:31:47.280406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.861 [2024-11-06 14:31:47.280434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.861 [2024-11-06 14:31:47.285017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.861 [2024-11-06 14:31:47.285085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.861 [2024-11-06 14:31:47.285120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.861 [2024-11-06 14:31:47.288973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.861 [2024-11-06 14:31:47.289041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.861 [2024-11-06 14:31:47.289077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.861 [2024-11-06 14:31:47.293717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.861 [2024-11-06 14:31:47.293800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.861 [2024-11-06 14:31:47.293828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.861 [2024-11-06 14:31:47.298515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.861 [2024-11-06 14:31:47.298602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.861 [2024-11-06 14:31:47.298629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.861 [2024-11-06 14:31:47.303278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.861 [2024-11-06 14:31:47.303355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.861 [2024-11-06 14:31:47.303382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.861 [2024-11-06 14:31:47.308019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.861 [2024-11-06 14:31:47.308090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.861 [2024-11-06 14:31:47.308126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.861 [2024-11-06 14:31:47.312911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.861 [2024-11-06 14:31:47.312977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.861 [2024-11-06 14:31:47.313012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.861 [2024-11-06 14:31:47.317716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.861 [2024-11-06 14:31:47.317847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.861 [2024-11-06 14:31:47.317887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.861 [2024-11-06 14:31:47.322444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.861 [2024-11-06 14:31:47.322632] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.861 [2024-11-06 14:31:47.322659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.861 [2024-11-06 14:31:47.326631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.861 [2024-11-06 14:31:47.327037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.861 [2024-11-06 14:31:47.327080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.861 [2024-11-06 14:31:47.331176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.861 [2024-11-06 14:31:47.331259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.861 [2024-11-06 14:31:47.331295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.861 [2024-11-06 14:31:47.335831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.861 [2024-11-06 14:31:47.335933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.861 [2024-11-06 14:31:47.335960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.861 [2024-11-06 14:31:47.340630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.861 [2024-11-06 14:31:47.340725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.861 [2024-11-06 14:31:47.340752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.861 [2024-11-06 14:31:47.345459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.861 [2024-11-06 14:31:47.345544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.861 [2024-11-06 14:31:47.345579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.861 [2024-11-06 14:31:47.350146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.861 [2024-11-06 14:31:47.350306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.861 [2024-11-06 14:31:47.350342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.861 [2024-11-06 14:31:47.354919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.861 [2024-11-06 14:31:47.354999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.861 [2024-11-06 14:31:47.355027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.861 [2024-11-06 14:31:47.359656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.861 [2024-11-06 14:31:47.359833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.861 [2024-11-06 14:31:47.359874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.861 [2024-11-06 14:31:47.364409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.861 [2024-11-06 14:31:47.364623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.861 [2024-11-06 14:31:47.364659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.861 [2024-11-06 14:31:47.368741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.861 [2024-11-06 14:31:47.369192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.861 [2024-11-06 14:31:47.369233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.861 [2024-11-06 14:31:47.373320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.862 [2024-11-06 14:31:47.373390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.862 [2024-11-06 14:31:47.373426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.862 [2024-11-06 14:31:47.378310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.862 [2024-11-06 14:31:47.378383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.862 [2024-11-06 14:31:47.378411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.862 [2024-11-06 14:31:47.383198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.862 [2024-11-06 14:31:47.383311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.862 [2024-11-06 14:31:47.383338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.862 [2024-11-06 
14:31:47.387875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.862 [2024-11-06 14:31:47.387943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.862 [2024-11-06 14:31:47.387981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.862 [2024-11-06 14:31:47.392653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.862 [2024-11-06 14:31:47.392719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.862 [2024-11-06 14:31:47.392755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.862 [2024-11-06 14:31:47.397346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.862 [2024-11-06 14:31:47.397428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.862 [2024-11-06 14:31:47.397455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.862 [2024-11-06 14:31:47.402227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.862 [2024-11-06 14:31:47.402301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.862 [2024-11-06 14:31:47.402329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.862 [2024-11-06 14:31:47.407068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.862 [2024-11-06 14:31:47.407162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.862 [2024-11-06 14:31:47.407197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.862 [2024-11-06 14:31:47.411807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.862 [2024-11-06 14:31:47.411968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.862 [2024-11-06 14:31:47.412003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.862 [2024-11-06 14:31:47.416506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.862 [2024-11-06 14:31:47.416668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.862 [2024-11-06 14:31:47.416695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.862 [2024-11-06 14:31:47.420629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.862 [2024-11-06 14:31:47.421029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.862 [2024-11-06 14:31:47.421063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.862 [2024-11-06 14:31:47.425142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.862 [2024-11-06 14:31:47.425210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.862 [2024-11-06 14:31:47.425246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.862 [2024-11-06 14:31:47.429958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.862 [2024-11-06 14:31:47.430027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.862 [2024-11-06 14:31:47.430062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.862 [2024-11-06 14:31:47.434707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.862 [2024-11-06 14:31:47.434779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.862 [2024-11-06 14:31:47.434806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.862 [2024-11-06 14:31:47.439402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.862 [2024-11-06 14:31:47.439505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.862 [2024-11-06 14:31:47.439533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.862 [2024-11-06 14:31:47.444177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.862 [2024-11-06 14:31:47.444278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.862 [2024-11-06 14:31:47.444305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.862 [2024-11-06 14:31:47.448956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.862 [2024-11-06 14:31:47.449022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.862 [2024-11-06 14:31:47.449062] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.862 [2024-11-06 14:31:47.453991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.862 [2024-11-06 14:31:47.454065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.862 [2024-11-06 14:31:47.454100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.862 [2024-11-06 14:31:47.458912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.862 [2024-11-06 14:31:47.459014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.862 [2024-11-06 14:31:47.459040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.862 [2024-11-06 14:31:47.463809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.862 [2024-11-06 14:31:47.463953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.862 [2024-11-06 14:31:47.463981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.862 [2024-11-06 14:31:47.468560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.862 [2024-11-06 14:31:47.468740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.862 [2024-11-06 14:31:47.468776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.862 [2024-11-06 14:31:47.472769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.862 [2024-11-06 14:31:47.473175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.862 [2024-11-06 14:31:47.473216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.862 [2024-11-06 14:31:47.477344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.862 [2024-11-06 14:31:47.477424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.862 [2024-11-06 14:31:47.477452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.862 [2024-11-06 14:31:47.482073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.862 [2024-11-06 14:31:47.482151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:19.862 [2024-11-06 14:31:47.482179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.862 [2024-11-06 14:31:47.486849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.862 [2024-11-06 14:31:47.486918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.862 [2024-11-06 14:31:47.486954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.862 [2024-11-06 14:31:47.491641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:19.862 [2024-11-06 14:31:47.491708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.862 [2024-11-06 14:31:47.491744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.124 [2024-11-06 14:31:47.496423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.124 [2024-11-06 14:31:47.496499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.124 [2024-11-06 14:31:47.496527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.124 [2024-11-06 14:31:47.501222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.124 [2024-11-06 14:31:47.501314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.124 [2024-11-06 14:31:47.501341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.124 [2024-11-06 14:31:47.505921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.124 [2024-11-06 14:31:47.506012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.124 [2024-11-06 14:31:47.506048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.124 [2024-11-06 14:31:47.510709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.124 [2024-11-06 14:31:47.510784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.124 [2024-11-06 14:31:47.510822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.124 [2024-11-06 14:31:47.514895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.124 [2024-11-06 14:31:47.515404] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.124 [2024-11-06 14:31:47.515437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.124 [2024-11-06 14:31:47.519507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.124 [2024-11-06 14:31:47.519610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.124 [2024-11-06 14:31:47.519637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.124 [2024-11-06 14:31:47.524229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.124 [2024-11-06 14:31:47.524306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.124 [2024-11-06 14:31:47.524334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.124 [2024-11-06 14:31:47.528860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.124 [2024-11-06 14:31:47.528937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.124 [2024-11-06 14:31:47.528972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.124 [2024-11-06 14:31:47.533582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.124 [2024-11-06 14:31:47.533664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.124 [2024-11-06 14:31:47.533700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.124 [2024-11-06 14:31:47.538404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.124 [2024-11-06 14:31:47.538546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.124 [2024-11-06 14:31:47.538574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.124 [2024-11-06 14:31:47.543335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.124 [2024-11-06 14:31:47.543413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.124 [2024-11-06 14:31:47.543441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.124 [2024-11-06 14:31:47.548300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.124 
[2024-11-06 14:31:47.548429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.124 [2024-11-06 14:31:47.548465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.124 [2024-11-06 14:31:47.553214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.124 [2024-11-06 14:31:47.553284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.124 [2024-11-06 14:31:47.553320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.124 [2024-11-06 14:31:47.557979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.124 [2024-11-06 14:31:47.558116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.124 [2024-11-06 14:31:47.558144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.124 [2024-11-06 14:31:47.562678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.124 [2024-11-06 14:31:47.562849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.124 [2024-11-06 14:31:47.562876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.124 [2024-11-06 14:31:47.566880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.124 [2024-11-06 14:31:47.567262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.124 [2024-11-06 14:31:47.567303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.124 [2024-11-06 14:31:47.571396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.124 [2024-11-06 14:31:47.571465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.124 [2024-11-06 14:31:47.571503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.124 [2024-11-06 14:31:47.576085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.124 [2024-11-06 14:31:47.576165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.124 [2024-11-06 14:31:47.576193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.124 [2024-11-06 14:31:47.580781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.124 [2024-11-06 14:31:47.580877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.124 [2024-11-06 14:31:47.580904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.124 [2024-11-06 14:31:47.585484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.124 [2024-11-06 14:31:47.585565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.124 [2024-11-06 14:31:47.585600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.124 [2024-11-06 14:31:47.590245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.124 [2024-11-06 14:31:47.590337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.124 [2024-11-06 14:31:47.590373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.124 [2024-11-06 14:31:47.594944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.124 [2024-11-06 14:31:47.595025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.124 [2024-11-06 14:31:47.595062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.124 [2024-11-06 14:31:47.599633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.124 [2024-11-06 14:31:47.599722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.124 [2024-11-06 14:31:47.599749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.124 [2024-11-06 14:31:47.604382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.124 [2024-11-06 14:31:47.604468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.124 [2024-11-06 14:31:47.604494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.124 [2024-11-06 14:31:47.608516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.124 [2024-11-06 14:31:47.609080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.125 [2024-11-06 14:31:47.609121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.125 [2024-11-06 
14:31:47.613578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.125 [2024-11-06 14:31:47.614104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.125 [2024-11-06 14:31:47.614144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.125 [2024-11-06 14:31:47.618189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.125 [2024-11-06 14:31:47.618289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.125 [2024-11-06 14:31:47.618317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.125 [2024-11-06 14:31:47.622972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.125 [2024-11-06 14:31:47.623046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.125 [2024-11-06 14:31:47.623074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.125 [2024-11-06 14:31:47.627695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.125 [2024-11-06 14:31:47.627766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.125 [2024-11-06 14:31:47.627803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.125 [2024-11-06 14:31:47.632444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.125 [2024-11-06 14:31:47.632516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.125 [2024-11-06 14:31:47.632556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.125 [2024-11-06 14:31:47.637168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.125 [2024-11-06 14:31:47.637260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.125 [2024-11-06 14:31:47.637287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.125 [2024-11-06 14:31:47.641914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.125 [2024-11-06 14:31:47.642082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.125 [2024-11-06 14:31:47.642110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.125 [2024-11-06 14:31:47.646597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.125 [2024-11-06 14:31:47.646764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.125 [2024-11-06 14:31:47.646800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.125 [2024-11-06 14:31:47.650704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.125 [2024-11-06 14:31:47.651102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.125 [2024-11-06 14:31:47.651136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.125 [2024-11-06 14:31:47.655513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.125 [2024-11-06 14:31:47.656063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.125 [2024-11-06 14:31:47.656095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.125 [2024-11-06 14:31:47.660228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.125 [2024-11-06 14:31:47.660308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.125 [2024-11-06 14:31:47.660336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.125 [2024-11-06 14:31:47.664961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.125 [2024-11-06 14:31:47.665038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.125 [2024-11-06 14:31:47.665066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.125 [2024-11-06 14:31:47.669693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.125 [2024-11-06 14:31:47.669758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.125 [2024-11-06 14:31:47.669795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.125 [2024-11-06 14:31:47.674541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.125 [2024-11-06 14:31:47.674613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.125 [2024-11-06 14:31:47.674649] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.125 [2024-11-06 14:31:47.679259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.125 [2024-11-06 14:31:47.679348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.125 [2024-11-06 14:31:47.679375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.125 [2024-11-06 14:31:47.683972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.125 [2024-11-06 14:31:47.684068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.125 [2024-11-06 14:31:47.684095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.125 [2024-11-06 14:31:47.688629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.125 [2024-11-06 14:31:47.688789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.125 [2024-11-06 14:31:47.688827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.125 [2024-11-06 14:31:47.693414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.125 [2024-11-06 14:31:47.693524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.125 [2024-11-06 14:31:47.693562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.125 [2024-11-06 14:31:47.698265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.125 [2024-11-06 14:31:47.698436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.125 [2024-11-06 14:31:47.698463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.125 [2024-11-06 14:31:47.702437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.125 [2024-11-06 14:31:47.702862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.125 [2024-11-06 14:31:47.702894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.125 [2024-11-06 14:31:47.707048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.125 [2024-11-06 14:31:47.707115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:20.125 [2024-11-06 14:31:47.707150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.125 [2024-11-06 14:31:47.711766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.125 [2024-11-06 14:31:47.711857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.125 [2024-11-06 14:31:47.711897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.125 [2024-11-06 14:31:47.716474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.125 [2024-11-06 14:31:47.716548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.125 [2024-11-06 14:31:47.716576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.125 [2024-11-06 14:31:47.721163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.125 [2024-11-06 14:31:47.721237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.125 [2024-11-06 14:31:47.721265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.125 [2024-11-06 14:31:47.725873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.125 [2024-11-06 14:31:47.725947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.125 [2024-11-06 14:31:47.725975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.125 [2024-11-06 14:31:47.730574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.125 [2024-11-06 14:31:47.730658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.125 [2024-11-06 14:31:47.730694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.125 [2024-11-06 14:31:47.735333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.126 [2024-11-06 14:31:47.735427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.126 [2024-11-06 14:31:47.735463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.126 [2024-11-06 14:31:47.740109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.126 [2024-11-06 14:31:47.740192] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.126 [2024-11-06 14:31:47.740220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.126 [2024-11-06 14:31:47.744299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.126 [2024-11-06 14:31:47.744796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.126 [2024-11-06 14:31:47.744831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.126 [2024-11-06 14:31:47.748932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.126 [2024-11-06 14:31:47.749028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.126 [2024-11-06 14:31:47.749063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.126 [2024-11-06 14:31:47.753581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.126 [2024-11-06 14:31:47.753651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.126 [2024-11-06 14:31:47.753691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.386 [2024-11-06 14:31:47.758239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.386 [2024-11-06 14:31:47.758325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.386 [2024-11-06 14:31:47.758352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.386 [2024-11-06 14:31:47.763050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.386 [2024-11-06 14:31:47.763129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.386 [2024-11-06 14:31:47.763156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.386 [2024-11-06 14:31:47.767750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.386 [2024-11-06 14:31:47.767817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.386 [2024-11-06 14:31:47.767865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.386 [2024-11-06 14:31:47.772503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.386 
[2024-11-06 14:31:47.772653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.386 [2024-11-06 14:31:47.772689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.386 [2024-11-06 14:31:47.777180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.386 [2024-11-06 14:31:47.777325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.386 [2024-11-06 14:31:47.777353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.386 [2024-11-06 14:31:47.781350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.386 [2024-11-06 14:31:47.781727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.386 [2024-11-06 14:31:47.781754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.386 [2024-11-06 14:31:47.785872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.386 [2024-11-06 14:31:47.785948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.386 [2024-11-06 14:31:47.785976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.386 [2024-11-06 14:31:47.790659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.386 [2024-11-06 14:31:47.790728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.386 [2024-11-06 14:31:47.790755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.386 [2024-11-06 14:31:47.795612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.387 [2024-11-06 14:31:47.795682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.387 [2024-11-06 14:31:47.795710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.387 [2024-11-06 14:31:47.800517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.387 [2024-11-06 14:31:47.800586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.387 [2024-11-06 14:31:47.800613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.387 [2024-11-06 14:31:47.805469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.387 [2024-11-06 14:31:47.805536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.387 [2024-11-06 14:31:47.805563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.387 [2024-11-06 14:31:47.810409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.387 [2024-11-06 14:31:47.810475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.387 [2024-11-06 14:31:47.810512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.387 [2024-11-06 14:31:47.815343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.387 [2024-11-06 14:31:47.815406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.387 [2024-11-06 14:31:47.815435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.387 [2024-11-06 14:31:47.820256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.387 [2024-11-06 14:31:47.820324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.387 [2024-11-06 14:31:47.820352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.387 [2024-11-06 14:31:47.825184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.387 [2024-11-06 14:31:47.825257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.387 [2024-11-06 14:31:47.825283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.387 [2024-11-06 14:31:47.830051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.387 [2024-11-06 14:31:47.830130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.387 [2024-11-06 14:31:47.830156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.387 [2024-11-06 14:31:47.834779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.387 [2024-11-06 14:31:47.834861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.387 [2024-11-06 14:31:47.834889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.387 [2024-11-06 
14:31:47.839449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.387 [2024-11-06 14:31:47.839529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.387 [2024-11-06 14:31:47.839556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.387 [2024-11-06 14:31:47.844187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.387 [2024-11-06 14:31:47.844259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.387 [2024-11-06 14:31:47.844286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.387 [2024-11-06 14:31:47.848905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.387 [2024-11-06 14:31:47.848997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.387 [2024-11-06 14:31:47.849025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.387 [2024-11-06 14:31:47.853616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.387 [2024-11-06 14:31:47.853707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.387 [2024-11-06 14:31:47.853735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.387 [2024-11-06 14:31:47.858352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.387 [2024-11-06 14:31:47.858564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.387 [2024-11-06 14:31:47.858591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.387 [2024-11-06 14:31:47.862625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.387 [2024-11-06 14:31:47.863065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.387 [2024-11-06 14:31:47.863098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.387 [2024-11-06 14:31:47.867206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.387 [2024-11-06 14:31:47.867274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.387 [2024-11-06 14:31:47.867301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.387 [2024-11-06 14:31:47.872004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.387 [2024-11-06 14:31:47.872075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.387 [2024-11-06 14:31:47.872102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.387 [2024-11-06 14:31:47.876940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.387 [2024-11-06 14:31:47.877001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.387 [2024-11-06 14:31:47.877029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.387 [2024-11-06 14:31:47.881807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.387 [2024-11-06 14:31:47.881887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.387 [2024-11-06 14:31:47.881916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.387 [2024-11-06 14:31:47.886617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.387 [2024-11-06 14:31:47.886691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.387 [2024-11-06 14:31:47.886719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.387 [2024-11-06 14:31:47.891583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.387 [2024-11-06 14:31:47.891648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.387 [2024-11-06 14:31:47.891677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.387 [2024-11-06 14:31:47.896355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.387 [2024-11-06 14:31:47.896509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.387 [2024-11-06 14:31:47.896536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.387 [2024-11-06 14:31:47.901078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.387 [2024-11-06 14:31:47.901174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.387 [2024-11-06 14:31:47.901202] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.387 [2024-11-06 14:31:47.905774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.387 [2024-11-06 14:31:47.905978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.388 [2024-11-06 14:31:47.906006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.388 [2024-11-06 14:31:47.910112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.388 [2024-11-06 14:31:47.910567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.388 [2024-11-06 14:31:47.910600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.388 [2024-11-06 14:31:47.914773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.388 [2024-11-06 14:31:47.914854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.388 [2024-11-06 14:31:47.914882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.388 [2024-11-06 14:31:47.919650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.388 [2024-11-06 14:31:47.919728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.388 [2024-11-06 14:31:47.919756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.388 [2024-11-06 14:31:47.924440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.388 [2024-11-06 14:31:47.924507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.388 [2024-11-06 14:31:47.924536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.388 [2024-11-06 14:31:47.929197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.388 [2024-11-06 14:31:47.929267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.388 [2024-11-06 14:31:47.929295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.388 [2024-11-06 14:31:47.934154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.388 [2024-11-06 14:31:47.934225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:20.388 [2024-11-06 14:31:47.934254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.388 [2024-11-06 14:31:47.938973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.388 [2024-11-06 14:31:47.939042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.388 [2024-11-06 14:31:47.939070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.388 [2024-11-06 14:31:47.943795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.388 [2024-11-06 14:31:47.943914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.388 [2024-11-06 14:31:47.943944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.388 [2024-11-06 14:31:47.948595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.388 [2024-11-06 14:31:47.948803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.388 [2024-11-06 14:31:47.948850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.388 6486.00 IOPS, 810.75 MiB/s [2024-11-06T14:31:48.023Z] [2024-11-06 14:31:47.954636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.388 [2024-11-06 14:31:47.954827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.388 [2024-11-06 14:31:47.954873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.388 [2024-11-06 14:31:47.958978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.388 [2024-11-06 14:31:47.959416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.388 [2024-11-06 14:31:47.959449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.388 [2024-11-06 14:31:47.963565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.388 [2024-11-06 14:31:47.963637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.388 [2024-11-06 14:31:47.963665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.388 [2024-11-06 14:31:47.968304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.388 [2024-11-06 14:31:47.968372] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.388 [2024-11-06 14:31:47.968401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.388 [2024-11-06 14:31:47.973072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.388 [2024-11-06 14:31:47.973149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.388 [2024-11-06 14:31:47.973178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.388 [2024-11-06 14:31:47.977824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.388 [2024-11-06 14:31:47.977902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.388 [2024-11-06 14:31:47.977929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.388 [2024-11-06 14:31:47.982700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.388 [2024-11-06 14:31:47.982790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.388 [2024-11-06 14:31:47.982819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.388 [2024-11-06 14:31:47.987424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.388 [2024-11-06 14:31:47.987584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.388 [2024-11-06 14:31:47.987612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.388 [2024-11-06 14:31:47.992235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.388 [2024-11-06 14:31:47.992326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.388 [2024-11-06 14:31:47.992355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.388 [2024-11-06 14:31:47.997041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.388 [2024-11-06 14:31:47.997120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.388 [2024-11-06 14:31:47.997148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.388 [2024-11-06 14:31:48.001209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) 
with pdu=0x200016bfef90 00:27:20.388 [2024-11-06 14:31:48.001715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.388 [2024-11-06 14:31:48.001749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.388 [2024-11-06 14:31:48.005903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.388 [2024-11-06 14:31:48.005997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.388 [2024-11-06 14:31:48.006024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.388 [2024-11-06 14:31:48.010791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.388 [2024-11-06 14:31:48.010936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.388 [2024-11-06 14:31:48.010964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.388 [2024-11-06 14:31:48.015551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.388 [2024-11-06 14:31:48.015622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.388 [2024-11-06 14:31:48.015651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.649 [2024-11-06 14:31:48.020423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.649 [2024-11-06 14:31:48.020524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.649 [2024-11-06 14:31:48.020553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.649 [2024-11-06 14:31:48.025250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.649 [2024-11-06 14:31:48.025380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.649 [2024-11-06 14:31:48.025409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.649 [2024-11-06 14:31:48.030015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.649 [2024-11-06 14:31:48.030123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.649 [2024-11-06 14:31:48.030151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.649 [2024-11-06 14:31:48.034801] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.649 [2024-11-06 14:31:48.034908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.649 [2024-11-06 14:31:48.034936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.649 [2024-11-06 14:31:48.039663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.649 [2024-11-06 14:31:48.039735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.649 [2024-11-06 14:31:48.039763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.649 [2024-11-06 14:31:48.043949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.649 [2024-11-06 14:31:48.044452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.649 [2024-11-06 14:31:48.044480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.649 [2024-11-06 14:31:48.048654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.649 [2024-11-06 14:31:48.048752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.649 [2024-11-06 14:31:48.048780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.649 [2024-11-06 14:31:48.053346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.649 [2024-11-06 14:31:48.053413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.649 [2024-11-06 14:31:48.053442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.649 [2024-11-06 14:31:48.058100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.649 [2024-11-06 14:31:48.058166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.649 [2024-11-06 14:31:48.058195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.649 [2024-11-06 14:31:48.062903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.649 [2024-11-06 14:31:48.062982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.649 [2024-11-06 14:31:48.063011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:27:20.650 [2024-11-06 14:31:48.067631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.650 [2024-11-06 14:31:48.067736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.650 [2024-11-06 14:31:48.067764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.650 [2024-11-06 14:31:48.072392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.650 [2024-11-06 14:31:48.072492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.650 [2024-11-06 14:31:48.072520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.650 [2024-11-06 14:31:48.077209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.650 [2024-11-06 14:31:48.077356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.650 [2024-11-06 14:31:48.077385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.650 [2024-11-06 14:31:48.081435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.650 [2024-11-06 14:31:48.081873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.650 [2024-11-06 14:31:48.081905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.650 [2024-11-06 14:31:48.086053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.650 [2024-11-06 14:31:48.086128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.650 [2024-11-06 14:31:48.086155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.650 [2024-11-06 14:31:48.090827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.650 [2024-11-06 14:31:48.090909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.650 [2024-11-06 14:31:48.090937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.650 [2024-11-06 14:31:48.095618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.650 [2024-11-06 14:31:48.095687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.650 [2024-11-06 14:31:48.095715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.650 [2024-11-06 14:31:48.100357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.650 [2024-11-06 14:31:48.100423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.650 [2024-11-06 14:31:48.100452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.650 [2024-11-06 14:31:48.105189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.650 [2024-11-06 14:31:48.105254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.650 [2024-11-06 14:31:48.105281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.650 [2024-11-06 14:31:48.109984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.650 [2024-11-06 14:31:48.110070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.650 [2024-11-06 14:31:48.110098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.650 [2024-11-06 14:31:48.114960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.650 [2024-11-06 14:31:48.115103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.650 [2024-11-06 14:31:48.115130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.650 [2024-11-06 14:31:48.119771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.650 [2024-11-06 14:31:48.119878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.650 [2024-11-06 14:31:48.119908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.650 [2024-11-06 14:31:48.124542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.650 [2024-11-06 14:31:48.124619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.650 [2024-11-06 14:31:48.124647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.650 [2024-11-06 14:31:48.128794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.650 [2024-11-06 14:31:48.129318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.650 [2024-11-06 
14:31:48.129345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.650 [2024-11-06 14:31:48.133642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.650 [2024-11-06 14:31:48.133737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.650 [2024-11-06 14:31:48.133764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.650 [2024-11-06 14:31:48.138477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.650 [2024-11-06 14:31:48.138550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.650 [2024-11-06 14:31:48.138578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.650 [2024-11-06 14:31:48.143402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.650 [2024-11-06 14:31:48.143469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.650 [2024-11-06 14:31:48.143497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.650 [2024-11-06 14:31:48.148428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.650 [2024-11-06 14:31:48.148512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.650 [2024-11-06 14:31:48.148541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.650 [2024-11-06 14:31:48.153272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.650 [2024-11-06 14:31:48.153375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.650 [2024-11-06 14:31:48.153403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.650 [2024-11-06 14:31:48.158116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.650 [2024-11-06 14:31:48.158209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.650 [2024-11-06 14:31:48.158238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.650 [2024-11-06 14:31:48.163068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.650 [2024-11-06 14:31:48.163185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24672 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.650 [2024-11-06 14:31:48.163213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.650 [2024-11-06 14:31:48.167892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.650 [2024-11-06 14:31:48.167961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.650 [2024-11-06 14:31:48.167988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.650 [2024-11-06 14:31:48.172084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.650 [2024-11-06 14:31:48.172570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.650 [2024-11-06 14:31:48.172604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.650 [2024-11-06 14:31:48.176748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.650 [2024-11-06 14:31:48.176864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.650 [2024-11-06 14:31:48.176891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.650 [2024-11-06 14:31:48.181473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.650 [2024-11-06 14:31:48.181542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.650 [2024-11-06 14:31:48.181570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.650 [2024-11-06 14:31:48.186407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.650 [2024-11-06 14:31:48.186486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.650 [2024-11-06 14:31:48.186525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.650 [2024-11-06 14:31:48.191235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.650 [2024-11-06 14:31:48.191301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.650 [2024-11-06 14:31:48.191329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.650 [2024-11-06 14:31:48.196049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.651 [2024-11-06 14:31:48.196118] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.651 [2024-11-06 14:31:48.196146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.651 [2024-11-06 14:31:48.200825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.651 [2024-11-06 14:31:48.201002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.651 [2024-11-06 14:31:48.201030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.651 [2024-11-06 14:31:48.205641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.651 [2024-11-06 14:31:48.205790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.651 [2024-11-06 14:31:48.205818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.651 [2024-11-06 14:31:48.209886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.651 [2024-11-06 14:31:48.210309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.651 [2024-11-06 14:31:48.210336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.651 [2024-11-06 14:31:48.214867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.651 [2024-11-06 14:31:48.215456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.651 [2024-11-06 14:31:48.215488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.651 [2024-11-06 14:31:48.219958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.651 [2024-11-06 14:31:48.220504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.651 [2024-11-06 14:31:48.220536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.651 [2024-11-06 14:31:48.224641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.651 [2024-11-06 14:31:48.224741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.651 [2024-11-06 14:31:48.224768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.651 [2024-11-06 14:31:48.229416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200016bfef90 00:27:20.651 [2024-11-06 14:31:48.229479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.651 [2024-11-06 14:31:48.229507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.651 [2024-11-06 14:31:48.234209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.651 [2024-11-06 14:31:48.234281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.651 [2024-11-06 14:31:48.234309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.651 [2024-11-06 14:31:48.239068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.651 [2024-11-06 14:31:48.239145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.651 [2024-11-06 14:31:48.239173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.651 [2024-11-06 14:31:48.243976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.651 [2024-11-06 14:31:48.244078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.651 [2024-11-06 14:31:48.244105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.651 [2024-11-06 14:31:48.248721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.651 [2024-11-06 14:31:48.248916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.651 [2024-11-06 14:31:48.248944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.651 [2024-11-06 14:31:48.253610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.651 [2024-11-06 14:31:48.253764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.651 [2024-11-06 14:31:48.253792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.651 [2024-11-06 14:31:48.257930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.651 [2024-11-06 14:31:48.258331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.651 [2024-11-06 14:31:48.258359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.651 [2024-11-06 14:31:48.262898] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.651 [2024-11-06 14:31:48.263481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.651 [2024-11-06 14:31:48.263513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.651 [2024-11-06 14:31:48.267735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.651 [2024-11-06 14:31:48.267813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.651 [2024-11-06 14:31:48.267855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.651 [2024-11-06 14:31:48.272605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.651 [2024-11-06 14:31:48.272674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.651 [2024-11-06 14:31:48.272702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.651 [2024-11-06 14:31:48.277426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.651 [2024-11-06 14:31:48.277490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.651 [2024-11-06 14:31:48.277519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.651 [2024-11-06 14:31:48.282141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.651 [2024-11-06 14:31:48.282238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.651 [2024-11-06 14:31:48.282267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.912 [2024-11-06 14:31:48.287024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.912 [2024-11-06 14:31:48.287096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.912 [2024-11-06 14:31:48.287124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.912 [2024-11-06 14:31:48.291876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.912 [2024-11-06 14:31:48.291972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.912 [2024-11-06 14:31:48.291999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:27:20.912 [2024-11-06 14:31:48.296861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.912 [2024-11-06 14:31:48.296975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.912 [2024-11-06 14:31:48.297002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.912 [2024-11-06 14:31:48.301640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.912 [2024-11-06 14:31:48.301800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.912 [2024-11-06 14:31:48.301827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.912 [2024-11-06 14:31:48.305923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.912 [2024-11-06 14:31:48.306313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.912 [2024-11-06 14:31:48.306346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.912 [2024-11-06 14:31:48.310530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.912 [2024-11-06 14:31:48.310596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.912 [2024-11-06 14:31:48.310625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.912 [2024-11-06 14:31:48.315353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.912 [2024-11-06 14:31:48.315417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.912 [2024-11-06 14:31:48.315446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.912 [2024-11-06 14:31:48.320101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.912 [2024-11-06 14:31:48.320173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.912 [2024-11-06 14:31:48.320201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.912 [2024-11-06 14:31:48.324950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.912 [2024-11-06 14:31:48.325016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.912 [2024-11-06 14:31:48.325043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.912 [2024-11-06 14:31:48.329710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.912 [2024-11-06 14:31:48.329783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.912 [2024-11-06 14:31:48.329811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.912 [2024-11-06 14:31:48.334525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.912 [2024-11-06 14:31:48.334596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.912 [2024-11-06 14:31:48.334624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.912 [2024-11-06 14:31:48.339247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.912 [2024-11-06 14:31:48.339318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.912 [2024-11-06 14:31:48.339346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.912 [2024-11-06 14:31:48.344057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.912 [2024-11-06 14:31:48.344126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.912 [2024-11-06 14:31:48.344153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.912 [2024-11-06 14:31:48.348211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.912 [2024-11-06 14:31:48.348706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.912 [2024-11-06 14:31:48.348740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.912 [2024-11-06 14:31:48.352812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.912 [2024-11-06 14:31:48.352928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.912 [2024-11-06 14:31:48.352955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.912 [2024-11-06 14:31:48.357566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.912 [2024-11-06 14:31:48.357643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.912 [2024-11-06 
14:31:48.357670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.912 [2024-11-06 14:31:48.362330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.912 [2024-11-06 14:31:48.362407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.912 [2024-11-06 14:31:48.362434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.912 [2024-11-06 14:31:48.367060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.912 [2024-11-06 14:31:48.367127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.912 [2024-11-06 14:31:48.367155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.912 [2024-11-06 14:31:48.371787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.912 [2024-11-06 14:31:48.371878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.912 [2024-11-06 14:31:48.371906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.912 [2024-11-06 14:31:48.376549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.912 [2024-11-06 14:31:48.376692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.912 [2024-11-06 14:31:48.376721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.912 [2024-11-06 14:31:48.381282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.912 [2024-11-06 14:31:48.381459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.912 [2024-11-06 14:31:48.381487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.912 [2024-11-06 14:31:48.386165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.912 [2024-11-06 14:31:48.386330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.912 [2024-11-06 14:31:48.386358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.912 [2024-11-06 14:31:48.391156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.913 [2024-11-06 14:31:48.391330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5696 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.913 [2024-11-06 14:31:48.391364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.913 [2024-11-06 14:31:48.395506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.913 [2024-11-06 14:31:48.395946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.913 [2024-11-06 14:31:48.395979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.913 [2024-11-06 14:31:48.400025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.913 [2024-11-06 14:31:48.400095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.913 [2024-11-06 14:31:48.400122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.913 [2024-11-06 14:31:48.404826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.913 [2024-11-06 14:31:48.404905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.913 [2024-11-06 14:31:48.404932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.913 [2024-11-06 14:31:48.409633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.913 [2024-11-06 14:31:48.409719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.913 [2024-11-06 14:31:48.409745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.913 [2024-11-06 14:31:48.414465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.913 [2024-11-06 14:31:48.414540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.913 [2024-11-06 14:31:48.414567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.913 [2024-11-06 14:31:48.419204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.913 [2024-11-06 14:31:48.419283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.913 [2024-11-06 14:31:48.419310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.913 [2024-11-06 14:31:48.423997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.913 [2024-11-06 14:31:48.424097] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.913 [2024-11-06 14:31:48.424124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.913 [2024-11-06 14:31:48.428732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.913 [2024-11-06 14:31:48.428873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.913 [2024-11-06 14:31:48.428901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.913 [2024-11-06 14:31:48.433466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.913 [2024-11-06 14:31:48.433610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.913 [2024-11-06 14:31:48.433637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.913 [2024-11-06 14:31:48.437691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.913 [2024-11-06 14:31:48.438088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.913 [2024-11-06 14:31:48.438121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.913 [2024-11-06 14:31:48.442186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.913 [2024-11-06 14:31:48.442254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.913 [2024-11-06 14:31:48.442281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.913 [2024-11-06 14:31:48.446942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.913 [2024-11-06 14:31:48.447007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.913 [2024-11-06 14:31:48.447036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.913 [2024-11-06 14:31:48.451684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.913 [2024-11-06 14:31:48.451754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.913 [2024-11-06 14:31:48.451783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.913 [2024-11-06 14:31:48.456485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200016bfef90 00:27:20.913 [2024-11-06 14:31:48.456587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.913 [2024-11-06 14:31:48.456615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.913 [2024-11-06 14:31:48.461313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.913 [2024-11-06 14:31:48.461391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.913 [2024-11-06 14:31:48.461420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.913 [2024-11-06 14:31:48.466109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.913 [2024-11-06 14:31:48.466207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.913 [2024-11-06 14:31:48.466235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.913 [2024-11-06 14:31:48.470882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.913 [2024-11-06 14:31:48.471007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.913 [2024-11-06 14:31:48.471034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.913 [2024-11-06 14:31:48.475541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.913 [2024-11-06 14:31:48.475694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.913 [2024-11-06 14:31:48.475721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.913 [2024-11-06 14:31:48.479691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.913 [2024-11-06 14:31:48.480114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.913 [2024-11-06 14:31:48.480146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.913 [2024-11-06 14:31:48.484212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.913 [2024-11-06 14:31:48.484276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.913 [2024-11-06 14:31:48.484304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.913 [2024-11-06 14:31:48.489170] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.913 [2024-11-06 14:31:48.489237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.913 [2024-11-06 14:31:48.489264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.913 [2024-11-06 14:31:48.494064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.913 [2024-11-06 14:31:48.494129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.913 [2024-11-06 14:31:48.494156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.913 [2024-11-06 14:31:48.499014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.913 [2024-11-06 14:31:48.499084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.913 [2024-11-06 14:31:48.499112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.913 [2024-11-06 14:31:48.503930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.913 [2024-11-06 14:31:48.503993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.913 [2024-11-06 14:31:48.504020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.913 [2024-11-06 14:31:48.508768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.913 [2024-11-06 14:31:48.508833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.913 [2024-11-06 14:31:48.508874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.913 [2024-11-06 14:31:48.513737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.914 [2024-11-06 14:31:48.513802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.914 [2024-11-06 14:31:48.513831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.914 [2024-11-06 14:31:48.518408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.914 [2024-11-06 14:31:48.518478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.914 [2024-11-06 14:31:48.518517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.914 [2024-11-06 14:31:48.523121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.914 [2024-11-06 14:31:48.523194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.914 [2024-11-06 14:31:48.523221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.914 [2024-11-06 14:31:48.527755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.914 [2024-11-06 14:31:48.527820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.914 [2024-11-06 14:31:48.527863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.914 [2024-11-06 14:31:48.532533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.914 [2024-11-06 14:31:48.532622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.914 [2024-11-06 14:31:48.532650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.914 [2024-11-06 14:31:48.537504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.914 [2024-11-06 14:31:48.537570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.914 [2024-11-06 14:31:48.537599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.914 [2024-11-06 14:31:48.542230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:20.914 [2024-11-06 14:31:48.542392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.914 [2024-11-06 14:31:48.542420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.174 [2024-11-06 14:31:48.546968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.174 [2024-11-06 14:31:48.547141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.174 [2024-11-06 14:31:48.547168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.174 [2024-11-06 14:31:48.551096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.174 [2024-11-06 14:31:48.551495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.174 [2024-11-06 14:31:48.551528] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.174 [2024-11-06 14:31:48.555564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.174 [2024-11-06 14:31:48.555647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.174 [2024-11-06 14:31:48.555675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.174 [2024-11-06 14:31:48.560281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.174 [2024-11-06 14:31:48.560360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.174 [2024-11-06 14:31:48.560387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.174 [2024-11-06 14:31:48.565042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.174 [2024-11-06 14:31:48.565112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.174 [2024-11-06 14:31:48.565139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.174 [2024-11-06 14:31:48.569668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.175 [2024-11-06 14:31:48.569752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.175 [2024-11-06 14:31:48.569780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.175 [2024-11-06 14:31:48.574397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.175 [2024-11-06 14:31:48.574483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.175 [2024-11-06 14:31:48.574522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.175 [2024-11-06 14:31:48.579108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.175 [2024-11-06 14:31:48.579196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.175 [2024-11-06 14:31:48.579223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.175 [2024-11-06 14:31:48.583750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.175 [2024-11-06 14:31:48.583858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:21.175 [2024-11-06 14:31:48.583885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.175 [2024-11-06 14:31:48.588435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.175 [2024-11-06 14:31:48.588511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.175 [2024-11-06 14:31:48.588539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.175 [2024-11-06 14:31:48.592564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.175 [2024-11-06 14:31:48.593075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.175 [2024-11-06 14:31:48.593107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.175 [2024-11-06 14:31:48.597091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.175 [2024-11-06 14:31:48.597175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.175 [2024-11-06 14:31:48.597203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.175 [2024-11-06 14:31:48.601882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.175 [2024-11-06 14:31:48.601945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.175 [2024-11-06 14:31:48.601973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.175 [2024-11-06 14:31:48.606832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.175 [2024-11-06 14:31:48.606923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.175 [2024-11-06 14:31:48.606951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.175 [2024-11-06 14:31:48.611740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.175 [2024-11-06 14:31:48.611812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.175 [2024-11-06 14:31:48.611854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.175 [2024-11-06 14:31:48.616483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.175 [2024-11-06 14:31:48.616550] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.175 [2024-11-06 14:31:48.616579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.175 [2024-11-06 14:31:48.621198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.175 [2024-11-06 14:31:48.621300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.175 [2024-11-06 14:31:48.621328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.175 [2024-11-06 14:31:48.625906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.175 [2024-11-06 14:31:48.625978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.175 [2024-11-06 14:31:48.626006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.175 [2024-11-06 14:31:48.630791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.175 [2024-11-06 14:31:48.630902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.175 [2024-11-06 14:31:48.630929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.175 [2024-11-06 14:31:48.635724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.175 [2024-11-06 14:31:48.635799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.175 [2024-11-06 14:31:48.635826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.175 [2024-11-06 14:31:48.640650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.175 [2024-11-06 14:31:48.640732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.175 [2024-11-06 14:31:48.640761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.175 [2024-11-06 14:31:48.645335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.175 [2024-11-06 14:31:48.645430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.175 [2024-11-06 14:31:48.645458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.175 [2024-11-06 14:31:48.650082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.175 
[2024-11-06 14:31:48.650183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.175 [2024-11-06 14:31:48.650212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.175 [2024-11-06 14:31:48.654853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.175 [2024-11-06 14:31:48.655041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.175 [2024-11-06 14:31:48.655069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.175 [2024-11-06 14:31:48.659121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.175 [2024-11-06 14:31:48.659549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.175 [2024-11-06 14:31:48.659582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.175 [2024-11-06 14:31:48.663693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.175 [2024-11-06 14:31:48.663763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.175 [2024-11-06 14:31:48.663790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.175 [2024-11-06 14:31:48.668382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.175 [2024-11-06 14:31:48.668448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.175 [2024-11-06 14:31:48.668476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.175 [2024-11-06 14:31:48.673245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.175 [2024-11-06 14:31:48.673312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.175 [2024-11-06 14:31:48.673339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.176 [2024-11-06 14:31:48.677995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.176 [2024-11-06 14:31:48.678067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.176 [2024-11-06 14:31:48.678094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.176 [2024-11-06 14:31:48.682703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.176 [2024-11-06 14:31:48.682796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.176 [2024-11-06 14:31:48.682824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.176 [2024-11-06 14:31:48.687390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.176 [2024-11-06 14:31:48.687491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.176 [2024-11-06 14:31:48.687519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.176 [2024-11-06 14:31:48.692133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.176 [2024-11-06 14:31:48.692292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.176 [2024-11-06 14:31:48.692320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.176 [2024-11-06 14:31:48.696788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.176 [2024-11-06 14:31:48.696967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.176 [2024-11-06 14:31:48.696994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.176 [2024-11-06 14:31:48.700965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.176 [2024-11-06 14:31:48.701373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.176 [2024-11-06 14:31:48.701405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.176 [2024-11-06 14:31:48.705436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.176 [2024-11-06 14:31:48.705506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.176 [2024-11-06 14:31:48.705533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.176 [2024-11-06 14:31:48.710136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.176 [2024-11-06 14:31:48.710200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.176 [2024-11-06 14:31:48.710228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.176 
[2024-11-06 14:31:48.714914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.176 [2024-11-06 14:31:48.714980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.176 [2024-11-06 14:31:48.715008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.176 [2024-11-06 14:31:48.719584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.176 [2024-11-06 14:31:48.719679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.176 [2024-11-06 14:31:48.719706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.176 [2024-11-06 14:31:48.724387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.176 [2024-11-06 14:31:48.724473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.176 [2024-11-06 14:31:48.724501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.176 [2024-11-06 14:31:48.729301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.176 [2024-11-06 14:31:48.729366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.176 [2024-11-06 14:31:48.729394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.176 [2024-11-06 14:31:48.734222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.176 [2024-11-06 14:31:48.734306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.176 [2024-11-06 14:31:48.734333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.176 [2024-11-06 14:31:48.739164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.176 [2024-11-06 14:31:48.739240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.176 [2024-11-06 14:31:48.739267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.176 [2024-11-06 14:31:48.744029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.176 [2024-11-06 14:31:48.744096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.176 [2024-11-06 14:31:48.744123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.176 [2024-11-06 14:31:48.748726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.176 [2024-11-06 14:31:48.748938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.176 [2024-11-06 14:31:48.748966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.176 [2024-11-06 14:31:48.753445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.176 [2024-11-06 14:31:48.753575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.176 [2024-11-06 14:31:48.753602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.176 [2024-11-06 14:31:48.757575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.176 [2024-11-06 14:31:48.758000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.176 [2024-11-06 14:31:48.758033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.176 [2024-11-06 14:31:48.762136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.176 [2024-11-06 14:31:48.762205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.176 [2024-11-06 14:31:48.762233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.176 [2024-11-06 14:31:48.766883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.176 [2024-11-06 14:31:48.766946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.176 [2024-11-06 14:31:48.766974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.176 [2024-11-06 14:31:48.771583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.176 [2024-11-06 14:31:48.771651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.176 [2024-11-06 14:31:48.771678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.176 [2024-11-06 14:31:48.776206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.176 [2024-11-06 14:31:48.776274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.176 [2024-11-06 14:31:48.776302] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.176 [2024-11-06 14:31:48.780775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.176 [2024-11-06 14:31:48.780888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.176 [2024-11-06 14:31:48.780916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.176 [2024-11-06 14:31:48.785483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.176 [2024-11-06 14:31:48.785644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.176 [2024-11-06 14:31:48.785671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.176 [2024-11-06 14:31:48.790163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.176 [2024-11-06 14:31:48.790262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.177 [2024-11-06 14:31:48.790289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.177 [2024-11-06 14:31:48.794931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.177 [2024-11-06 14:31:48.795114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.177 [2024-11-06 14:31:48.795142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.177 [2024-11-06 14:31:48.799822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.177 [2024-11-06 14:31:48.800021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.177 [2024-11-06 14:31:48.800049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.177 [2024-11-06 14:31:48.804080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.177 [2024-11-06 14:31:48.804528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.177 [2024-11-06 14:31:48.804555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.436 [2024-11-06 14:31:48.808632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.436 [2024-11-06 14:31:48.808702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:21.436 [2024-11-06 14:31:48.808730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.436 [2024-11-06 14:31:48.813469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.436 [2024-11-06 14:31:48.813540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.436 [2024-11-06 14:31:48.813567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.436 [2024-11-06 14:31:48.818106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.436 [2024-11-06 14:31:48.818194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.436 [2024-11-06 14:31:48.818221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.436 [2024-11-06 14:31:48.822875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.436 [2024-11-06 14:31:48.822943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.436 [2024-11-06 14:31:48.822971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.436 [2024-11-06 14:31:48.827611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.436 [2024-11-06 14:31:48.827686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.436 [2024-11-06 14:31:48.827714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.436 [2024-11-06 14:31:48.832245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.436 [2024-11-06 14:31:48.832408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.436 [2024-11-06 14:31:48.832435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.436 [2024-11-06 14:31:48.837071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.436 [2024-11-06 14:31:48.837221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.436 [2024-11-06 14:31:48.837248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.436 [2024-11-06 14:31:48.842014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.436 [2024-11-06 14:31:48.842139] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.436 [2024-11-06 14:31:48.842166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.436 [2024-11-06 14:31:48.846949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.436 [2024-11-06 14:31:48.847056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.436 [2024-11-06 14:31:48.847083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.436 [2024-11-06 14:31:48.851595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.437 [2024-11-06 14:31:48.851754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.437 [2024-11-06 14:31:48.851781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.437 [2024-11-06 14:31:48.855755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.437 [2024-11-06 14:31:48.856180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.437 [2024-11-06 14:31:48.856212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.437 [2024-11-06 14:31:48.860404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.437 [2024-11-06 14:31:48.860489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.437 [2024-11-06 14:31:48.860518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.437 [2024-11-06 14:31:48.865177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.437 [2024-11-06 14:31:48.865243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.437 [2024-11-06 14:31:48.865271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.437 [2024-11-06 14:31:48.869930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.437 [2024-11-06 14:31:48.869996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.437 [2024-11-06 14:31:48.870023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.437 [2024-11-06 14:31:48.874765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.437 
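The error/completion stream continues for a few more records; once the 2-second bdevperf run finishes, the script totals the failures through the bperf RPC socket rather than by parsing this output: get_transient_errcount calls bdev_get_iostat and filters the transient-transport-error counter with jq, and digest.sh@71 (visible below) asserts the resulting count of 421 is greater than zero. A minimal stand-alone sketch of that query, using the paths shown in this run:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # RPC client used throughout this run
    SOCK=/var/tmp/bperf.sock                          # bdevperf's RPC socket
    # Pull the per-bdev NVMe error counters and keep only the transient transport errors
    count=$("$RPC" -s "$SOCK" bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( count > 0 )) && echo "transient transport errors: $count"   # mirrors the check at digest.sh@71
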
[2024-11-06 14:31:48.874831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.437 [2024-11-06 14:31:48.874873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.437 [2024-11-06 14:31:48.879721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.437 [2024-11-06 14:31:48.879787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.437 [2024-11-06 14:31:48.879816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.437 [2024-11-06 14:31:48.884634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.437 [2024-11-06 14:31:48.884719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.437 [2024-11-06 14:31:48.884746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.437 [2024-11-06 14:31:48.889439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.437 [2024-11-06 14:31:48.889524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.437 [2024-11-06 14:31:48.889551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.437 [2024-11-06 14:31:48.894293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.437 [2024-11-06 14:31:48.894361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.437 [2024-11-06 14:31:48.894389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.437 [2024-11-06 14:31:48.898987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.437 [2024-11-06 14:31:48.899065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.437 [2024-11-06 14:31:48.899093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.437 [2024-11-06 14:31:48.903654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.437 [2024-11-06 14:31:48.903745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.437 [2024-11-06 14:31:48.903773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.437 [2024-11-06 14:31:48.908421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.437 [2024-11-06 14:31:48.908534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.437 [2024-11-06 14:31:48.908562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.437 [2024-11-06 14:31:48.913096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.437 [2024-11-06 14:31:48.913259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.437 [2024-11-06 14:31:48.913286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.437 [2024-11-06 14:31:48.917316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.437 [2024-11-06 14:31:48.917712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.437 [2024-11-06 14:31:48.917745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.437 [2024-11-06 14:31:48.921783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.437 [2024-11-06 14:31:48.921861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.437 [2024-11-06 14:31:48.921889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.437 [2024-11-06 14:31:48.926392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.437 [2024-11-06 14:31:48.926462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.437 [2024-11-06 14:31:48.926491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.437 [2024-11-06 14:31:48.931109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.437 [2024-11-06 14:31:48.931178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.437 [2024-11-06 14:31:48.931207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.437 [2024-11-06 14:31:48.935801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.437 [2024-11-06 14:31:48.935898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.437 [2024-11-06 14:31:48.935926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.437 [2024-11-06 
14:31:48.940409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.437 [2024-11-06 14:31:48.940492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.437 [2024-11-06 14:31:48.940519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.437 [2024-11-06 14:31:48.945133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.437 [2024-11-06 14:31:48.945272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.437 [2024-11-06 14:31:48.945299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.437 6519.50 IOPS, 814.94 MiB/s [2024-11-06T14:31:49.072Z] [2024-11-06 14:31:48.950752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bfef90 00:27:21.437 [2024-11-06 14:31:48.950851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.437 [2024-11-06 14:31:48.950879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.437 00:27:21.437 Latency(us) 00:27:21.437 [2024-11-06T14:31:49.072Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:21.437 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:21.437 nvme0n1 : 2.00 6517.26 814.66 0.00 0.00 2450.40 1644.98 12054.41 00:27:21.437 [2024-11-06T14:31:49.072Z] =================================================================================================================== 00:27:21.437 [2024-11-06T14:31:49.072Z] Total : 6517.26 814.66 0.00 0.00 2450.40 1644.98 12054.41 00:27:21.437 { 00:27:21.437 "results": [ 00:27:21.437 { 00:27:21.437 "job": "nvme0n1", 00:27:21.437 "core_mask": "0x2", 00:27:21.437 "workload": "randwrite", 00:27:21.437 "status": "finished", 00:27:21.437 "queue_depth": 16, 00:27:21.437 "io_size": 131072, 00:27:21.437 "runtime": 2.004217, 00:27:21.437 "iops": 6517.258360746367, 00:27:21.437 "mibps": 814.6572950932958, 00:27:21.437 "io_failed": 0, 00:27:21.437 "io_timeout": 0, 00:27:21.437 "avg_latency_us": 2450.4037480806705, 00:27:21.437 "min_latency_us": 1644.9799196787149, 00:27:21.437 "max_latency_us": 12054.412851405623 00:27:21.438 } 00:27:21.438 ], 00:27:21.438 "core_count": 1 00:27:21.438 } 00:27:21.438 14:31:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:21.438 14:31:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:21.438 14:31:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:21.438 14:31:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:21.438 | .driver_specific 00:27:21.438 | .nvme_error 00:27:21.438 | .status_code 00:27:21.438 | .command_transient_transport_error' 00:27:21.697 14:31:49 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 421 > 0 )) 00:27:21.697 14:31:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 87473 00:27:21.697 14:31:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 87473 ']' 00:27:21.697 14:31:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 87473 00:27:21.697 14:31:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:27:21.697 14:31:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:21.697 14:31:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87473 00:27:21.697 14:31:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:27:21.697 14:31:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:27:21.697 14:31:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87473' 00:27:21.697 killing process with pid 87473 00:27:21.697 Received shutdown signal, test time was about 2.000000 seconds 00:27:21.697 00:27:21.697 Latency(us) 00:27:21.697 [2024-11-06T14:31:49.332Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:21.697 [2024-11-06T14:31:49.332Z] =================================================================================================================== 00:27:21.697 [2024-11-06T14:31:49.332Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:21.697 14:31:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 87473 00:27:21.697 14:31:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 87473 00:27:23.074 14:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 87239 00:27:23.074 14:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 87239 ']' 00:27:23.074 14:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 87239 00:27:23.074 14:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:27:23.074 14:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:23.074 14:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87239 00:27:23.074 killing process with pid 87239 00:27:23.074 14:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:23.074 14:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:23.074 14:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87239' 00:27:23.074 14:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 87239 00:27:23.074 14:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 87239 00:27:24.451 ************************************ 00:27:24.451 END TEST nvmf_digest_error 00:27:24.451 
************************************ 00:27:24.451 00:27:24.451 real 0m22.538s 00:27:24.451 user 0m40.730s 00:27:24.451 sys 0m5.817s 00:27:24.451 14:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:24.451 14:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:24.451 14:31:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:27:24.451 14:31:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:27:24.451 14:31:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:24.451 14:31:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:27:24.451 14:31:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:24.451 14:31:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:27:24.451 14:31:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:24.451 14:31:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:24.451 rmmod nvme_tcp 00:27:24.451 rmmod nvme_fabrics 00:27:24.451 rmmod nvme_keyring 00:27:24.451 14:31:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:24.451 14:31:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:27:24.451 14:31:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:27:24.451 14:31:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 87239 ']' 00:27:24.451 14:31:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 87239 00:27:24.451 14:31:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # '[' -z 87239 ']' 00:27:24.451 14:31:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@956 -- # kill -0 87239 00:27:24.451 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (87239) - No such process 00:27:24.451 Process with pid 87239 is not found 00:27:24.451 14:31:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@979 -- # echo 'Process with pid 87239 is not found' 00:27:24.451 14:31:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:24.451 14:31:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:24.451 14:31:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:24.451 14:31:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:27:24.451 14:31:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:27:24.451 14:31:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:24.451 14:31:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:27:24.451 14:31:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:24.451 14:31:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:27:24.451 14:31:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:27:24.451 14:31:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:27:24.451 14:31:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:27:24.451 14:31:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set 
nvmf_tgt_br2 nomaster 00:27:24.451 14:31:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:27:24.451 14:31:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:27:24.451 14:31:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:27:24.451 14:31:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:27:24.451 14:31:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:27:24.451 14:31:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:27:24.451 14:31:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:27:24.748 14:31:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:24.748 14:31:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:24.748 14:31:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:27:24.748 14:31:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:24.748 14:31:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:24.748 14:31:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:24.748 14:31:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:27:24.748 00:27:24.748 real 0m48.158s 00:27:24.748 user 1m25.478s 00:27:24.748 sys 0m12.200s 00:27:24.748 14:31:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:24.748 14:31:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:24.748 ************************************ 00:27:24.748 END TEST nvmf_digest 00:27:24.748 ************************************ 00:27:24.748 14:31:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:27:24.748 14:31:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:27:24.748 14:31:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:27:24.748 14:31:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:24.748 14:31:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:24.748 14:31:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.748 ************************************ 00:27:24.748 START TEST nvmf_host_multipath 00:27:24.748 ************************************ 00:27:24.748 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:27:25.008 * Looking for test storage... 
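Once the test-storage probe completes, the trace that follows is autotest_common.sh checking the installed lcov before the multipath test proper starts: it reads the version with awk, splits it on dots, and compares it against 2 so that the pre-2.x option spellings (--rc lcov_branch_coverage=1 ...) are exported. Reduced to its essentials, the check amounts to something like the simplified sketch below (the real logic lives in scripts/common.sh cmp_versions):

    lcov_ver=$(lcov --version | awk '{print $NF}')   # e.g. 1.15
    IFS=.- read -ra have <<< "$lcov_ver"
    if (( ${have[0]:-0} < 2 )); then
        # pre-2.x lcov keeps the lcov_ prefix on its rc keys
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi
    echo "lcov $lcov_ver -> LCOV_OPTS=$LCOV_OPTS"
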
00:27:25.008 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:25.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.008 --rc genhtml_branch_coverage=1 00:27:25.008 --rc genhtml_function_coverage=1 00:27:25.008 --rc genhtml_legend=1 00:27:25.008 --rc geninfo_all_blocks=1 00:27:25.008 --rc geninfo_unexecuted_blocks=1 00:27:25.008 00:27:25.008 ' 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:25.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.008 --rc genhtml_branch_coverage=1 00:27:25.008 --rc genhtml_function_coverage=1 00:27:25.008 --rc genhtml_legend=1 00:27:25.008 --rc geninfo_all_blocks=1 00:27:25.008 --rc geninfo_unexecuted_blocks=1 00:27:25.008 00:27:25.008 ' 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:25.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.008 --rc genhtml_branch_coverage=1 00:27:25.008 --rc genhtml_function_coverage=1 00:27:25.008 --rc genhtml_legend=1 00:27:25.008 --rc geninfo_all_blocks=1 00:27:25.008 --rc geninfo_unexecuted_blocks=1 00:27:25.008 00:27:25.008 ' 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:25.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.008 --rc genhtml_branch_coverage=1 00:27:25.008 --rc genhtml_function_coverage=1 00:27:25.008 --rc genhtml_legend=1 00:27:25.008 --rc geninfo_all_blocks=1 00:27:25.008 --rc geninfo_unexecuted_blocks=1 00:27:25.008 00:27:25.008 ' 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.008 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:27:25.009 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.009 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:27:25.009 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:25.009 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:25.009 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:25.009 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:25.009 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:25.009 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:25.009 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:25.009 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:25.009 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:25.009 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:25.009 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:25.009 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:25.009 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:25.009 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:27:25.009 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:25.009 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:27:25.009 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:27:25.009 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:25.009 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:25.009 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:25.009 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:25.009 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:25.009 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:25.009 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:25.009 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:25.009 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:27:25.009 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:27:25.009 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:27:25.009 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:27:25.009 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:27:25.009 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:27:25.009 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:25.009 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:27:25.009 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:27:25.009 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:25.009 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:25.009 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:27:25.009 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:25.009 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:27:25.009 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:25.009 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:27:25.009 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:25.009 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:25.009 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:25.009 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:25.009 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:25.009 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:25.009 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:27:25.009 Cannot find device "nvmf_init_br" 00:27:25.009 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:27:25.009 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:27:25.009 Cannot find device "nvmf_init_br2" 00:27:25.009 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:27:25.009 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:27:25.009 Cannot find device "nvmf_tgt_br" 00:27:25.009 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:27:25.009 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:27:25.009 Cannot find device "nvmf_tgt_br2" 00:27:25.009 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:27:25.009 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:27:25.269 Cannot find device "nvmf_init_br" 00:27:25.269 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:27:25.269 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:27:25.269 Cannot find device "nvmf_init_br2" 00:27:25.269 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:27:25.269 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:27:25.269 Cannot find device "nvmf_tgt_br" 00:27:25.269 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:27:25.269 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:27:25.269 Cannot find device "nvmf_tgt_br2" 00:27:25.269 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:27:25.269 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:27:25.269 Cannot find device "nvmf_br" 00:27:25.269 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:27:25.269 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:27:25.269 Cannot find device "nvmf_init_if" 00:27:25.269 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:27:25.269 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:27:25.269 Cannot find device "nvmf_init_if2" 00:27:25.269 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:27:25.269 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:27:25.269 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:25.269 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:27:25.269 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:25.269 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:25.269 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:27:25.269 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:27:25.269 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:25.269 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:27:25.269 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:25.269 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:25.269 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:25.269 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:25.269 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:25.269 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:27:25.269 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:27:25.269 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:27:25.269 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:27:25.269 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:27:25.269 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:27:25.528 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:27:25.528 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:27:25.528 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:27:25.528 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:25.528 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:25.528 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:25.528 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:27:25.528 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:27:25.528 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
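(Note: the nvmf_veth_init steps traced above amount to the standalone sketch below. It covers only the first veth pair on each side and reconstructs the commands from the trace for illustration; it is not the harness script itself.)

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end + its bridge-side peer
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target end + its bridge-side peer
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end lives inside the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_tgt_br up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                      # bridge the two peer ends together
ip link set nvmf_tgt_br master nvmf_br                       # so 10.0.0.1 can reach 10.0.0.3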
00:27:25.528 14:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:27:25.528 14:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:25.528 14:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:25.528 14:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:25.528 14:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:27:25.528 14:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:27:25.529 14:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:27:25.529 14:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:25.529 14:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:27:25.529 14:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:27:25.529 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:25.529 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.108 ms 00:27:25.529 00:27:25.529 --- 10.0.0.3 ping statistics --- 00:27:25.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:25.529 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:27:25.529 14:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:27:25.529 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:27:25.529 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:27:25.529 00:27:25.529 --- 10.0.0.4 ping statistics --- 00:27:25.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:25.529 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:27:25.529 14:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:25.529 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:25.529 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:27:25.529 00:27:25.529 --- 10.0.0.1 ping statistics --- 00:27:25.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:25.529 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:27:25.529 14:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:27:25.529 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:25.529 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:27:25.529 00:27:25.529 --- 10.0.0.2 ping statistics --- 00:27:25.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:25.529 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:27:25.529 14:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:25.529 14:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:27:25.529 14:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:25.529 14:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:25.529 14:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:25.529 14:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:25.529 14:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:25.529 14:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:25.529 14:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:25.529 14:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:27:25.529 14:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:25.529 14:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:25.529 14:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:27:25.529 14:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=87822 00:27:25.529 14:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:27:25.529 14:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 87822 00:27:25.529 14:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@833 -- # '[' -z 87822 ']' 00:27:25.529 14:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:25.529 14:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:25.529 14:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:25.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:25.529 14:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:25.529 14:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:27:25.788 [2024-11-06 14:31:53.249174] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
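(Note: at this point nvmfappstart launches the target application inside the namespace and blocks until its RPC socket answers. A minimal sketch of that step, using the paths from the trace; the harness's own waitforlisten helper polls the socket, which is approximated here with rpc.py's built-in timeout.)

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!
# do not issue configuration RPCs until the target is listening on /var/tmp/spdk.sock
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock -t 60 rpc_get_methods > /dev/null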
00:27:25.788 [2024-11-06 14:31:53.249295] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:26.047 [2024-11-06 14:31:53.433401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:26.047 [2024-11-06 14:31:53.562031] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:26.047 [2024-11-06 14:31:53.562088] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:26.047 [2024-11-06 14:31:53.562104] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:26.047 [2024-11-06 14:31:53.562141] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:26.047 [2024-11-06 14:31:53.562155] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:26.047 [2024-11-06 14:31:53.564387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:26.047 [2024-11-06 14:31:53.564423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:26.306 [2024-11-06 14:31:53.808253] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:27:26.565 14:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:26.565 14:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@866 -- # return 0 00:27:26.565 14:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:26.565 14:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:26.565 14:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:27:26.565 14:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:26.565 14:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=87822 00:27:26.565 14:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:26.824 [2024-11-06 14:31:54.304131] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:26.824 14:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:27.083 Malloc0 00:27:27.083 14:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:27:27.342 14:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:27.601 14:31:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:27.601 [2024-11-06 14:31:55.198229] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:27.601 14:31:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:27:27.860 [2024-11-06 14:31:55.394515] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:27:27.860 14:31:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=87872 00:27:27.860 14:31:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:27:27.860 14:31:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:27.860 14:31:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 87872 /var/tmp/bdevperf.sock 00:27:27.860 14:31:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@833 -- # '[' -z 87872 ']' 00:27:27.860 14:31:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:27.860 14:31:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:27.860 14:31:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:27.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:27.860 14:31:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:27.860 14:31:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:27:28.796 14:31:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:28.796 14:31:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@866 -- # return 0 00:27:28.796 14:31:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:27:29.055 14:31:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:29.314 Nvme0n1 00:27:29.314 14:31:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:29.573 Nvme0n1 00:27:29.573 14:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:27:29.573 14:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:27:30.950 14:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:27:30.950 14:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:27:30.950 14:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:27:30.950 14:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:27:30.950 14:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 87822 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:30.950 14:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87912 00:27:30.950 14:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:27:37.516 14:32:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:37.516 14:32:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:27:37.516 14:32:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:27:37.516 14:32:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:37.516 Attaching 4 probes... 00:27:37.516 @path[10.0.0.3, 4421]: 17239 00:27:37.516 @path[10.0.0.3, 4421]: 17331 00:27:37.516 @path[10.0.0.3, 4421]: 17005 00:27:37.516 @path[10.0.0.3, 4421]: 17350 00:27:37.516 @path[10.0.0.3, 4421]: 17043 00:27:37.516 14:32:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:37.516 14:32:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:27:37.516 14:32:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:27:37.516 14:32:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:27:37.516 14:32:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:27:37.516 14:32:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:27:37.516 14:32:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87912 00:27:37.516 14:32:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:37.516 14:32:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:27:37.516 14:32:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:27:37.516 14:32:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:27:37.776 14:32:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:27:37.776 14:32:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=88027 00:27:37.776 14:32:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:27:37.776 14:32:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 87822 
/home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:44.385 14:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:44.385 14:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:27:44.385 14:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:27:44.385 14:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:44.385 Attaching 4 probes... 00:27:44.385 @path[10.0.0.3, 4420]: 17153 00:27:44.385 @path[10.0.0.3, 4420]: 17305 00:27:44.385 @path[10.0.0.3, 4420]: 17234 00:27:44.385 @path[10.0.0.3, 4420]: 17426 00:27:44.385 @path[10.0.0.3, 4420]: 17363 00:27:44.385 14:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:44.385 14:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:27:44.385 14:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:27:44.385 14:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:27:44.385 14:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:27:44.385 14:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:27:44.385 14:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 88027 00:27:44.385 14:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:44.385 14:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:27:44.385 14:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:27:44.385 14:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:27:44.385 14:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:27:44.385 14:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=88140 00:27:44.385 14:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 87822 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:44.385 14:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:27:50.954 14:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:50.954 14:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:27:50.954 14:32:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:27:50.954 14:32:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:50.954 Attaching 4 probes... 00:27:50.954 @path[10.0.0.3, 4421]: 15190 00:27:50.954 @path[10.0.0.3, 4421]: 19539 00:27:50.954 @path[10.0.0.3, 4421]: 19451 00:27:50.954 @path[10.0.0.3, 4421]: 19529 00:27:50.954 @path[10.0.0.3, 4421]: 19494 00:27:50.954 14:32:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:50.954 14:32:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:27:50.954 14:32:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:27:50.954 14:32:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:27:50.954 14:32:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:27:50.954 14:32:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:27:50.954 14:32:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 88140 00:27:50.954 14:32:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:50.954 14:32:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:27:50.954 14:32:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:27:50.954 14:32:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:27:51.213 14:32:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:27:51.213 14:32:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=88253 00:27:51.213 14:32:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 87822 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:51.213 14:32:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:27:57.802 14:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:57.802 14:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:27:57.802 14:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:27:57.802 14:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:57.802 Attaching 4 probes... 
00:27:57.802 00:27:57.802 00:27:57.802 00:27:57.802 00:27:57.802 00:27:57.802 14:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:57.802 14:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:27:57.802 14:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:27:57.802 14:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:27:57.802 14:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:27:57.802 14:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:27:57.802 14:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 88253 00:27:57.802 14:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:57.802 14:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:27:57.802 14:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:27:57.802 14:32:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:27:57.802 14:32:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:27:57.803 14:32:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 87822 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:57.803 14:32:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=88366 00:27:57.803 14:32:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:28:04.371 14:32:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:04.371 14:32:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:28:04.371 14:32:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:28:04.371 14:32:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:04.371 Attaching 4 probes... 
00:28:04.371 @path[10.0.0.3, 4421]: 18823 00:28:04.371 @path[10.0.0.3, 4421]: 19168 00:28:04.371 @path[10.0.0.3, 4421]: 19268 00:28:04.371 @path[10.0.0.3, 4421]: 19305 00:28:04.371 @path[10.0.0.3, 4421]: 19190 00:28:04.371 14:32:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:04.371 14:32:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:28:04.371 14:32:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:28:04.371 14:32:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:28:04.371 14:32:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:28:04.371 14:32:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:28:04.371 14:32:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 88366 00:28:04.371 14:32:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:04.371 14:32:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:28:04.371 [2024-11-06 14:32:31.758048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(6) to be set 00:28:04.371 [2024-11-06 14:32:31.758096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(6) to be set 00:28:04.371 [2024-11-06 14:32:31.758112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(6) to be set 00:28:04.371 14:32:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:28:05.309 14:32:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:28:05.309 14:32:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=88484 00:28:05.309 14:32:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 87822 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:28:05.309 14:32:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:28:11.876 14:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:11.876 14:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:28:11.876 14:32:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:28:11.876 14:32:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:11.876 Attaching 4 probes... 
00:28:11.876 @path[10.0.0.3, 4420]: 19081 00:28:11.876 @path[10.0.0.3, 4420]: 19368 00:28:11.876 @path[10.0.0.3, 4420]: 19288 00:28:11.876 @path[10.0.0.3, 4420]: 19320 00:28:11.876 @path[10.0.0.3, 4420]: 19346 00:28:11.876 14:32:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:11.876 14:32:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:28:11.876 14:32:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:28:11.876 14:32:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:28:11.876 14:32:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:28:11.876 14:32:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:28:11.876 14:32:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 88484 00:28:11.876 14:32:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:11.876 14:32:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:28:11.876 [2024-11-06 14:32:39.229754] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:28:11.876 14:32:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:28:11.876 14:32:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:28:18.447 14:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:28:18.447 14:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=88658 00:28:18.447 14:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 87822 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:28:18.447 14:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:28:25.019 14:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:25.019 14:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:28:25.019 14:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:28:25.019 14:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:25.019 Attaching 4 probes... 
00:28:25.019 @path[10.0.0.3, 4421]: 18817 00:28:25.019 @path[10.0.0.3, 4421]: 19189 00:28:25.019 @path[10.0.0.3, 4421]: 19168 00:28:25.019 @path[10.0.0.3, 4421]: 19139 00:28:25.019 @path[10.0.0.3, 4421]: 19097 00:28:25.019 14:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:25.019 14:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:28:25.019 14:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:28:25.019 14:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:28:25.019 14:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:28:25.019 14:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:28:25.019 14:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 88658 00:28:25.019 14:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:25.019 14:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 87872 00:28:25.019 14:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@952 -- # '[' -z 87872 ']' 00:28:25.019 14:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # kill -0 87872 00:28:25.019 14:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@957 -- # uname 00:28:25.019 14:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:25.019 14:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87872 00:28:25.019 killing process with pid 87872 00:28:25.019 14:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:28:25.019 14:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:28:25.019 14:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87872' 00:28:25.019 14:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@971 -- # kill 87872 00:28:25.019 14:32:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@976 -- # wait 87872 00:28:25.019 { 00:28:25.019 "results": [ 00:28:25.019 { 00:28:25.019 "job": "Nvme0n1", 00:28:25.019 "core_mask": "0x4", 00:28:25.019 "workload": "verify", 00:28:25.019 "status": "terminated", 00:28:25.019 "verify_range": { 00:28:25.019 "start": 0, 00:28:25.019 "length": 16384 00:28:25.019 }, 00:28:25.019 "queue_depth": 128, 00:28:25.019 "io_size": 4096, 00:28:25.019 "runtime": 54.619577, 00:28:25.019 "iops": 7954.858383469355, 00:28:25.019 "mibps": 31.07366556042717, 00:28:25.019 "io_failed": 0, 00:28:25.019 "io_timeout": 0, 00:28:25.019 "avg_latency_us": 16073.070347228715, 00:28:25.019 "min_latency_us": 835.6497991967872, 00:28:25.019 "max_latency_us": 7061253.963052209 00:28:25.019 } 00:28:25.019 ], 00:28:25.019 "core_count": 1 00:28:25.019 } 00:28:25.287 14:32:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 87872 00:28:25.287 14:32:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:28:25.287 [2024-11-06 14:31:55.511494] Starting SPDK v25.01-pre git sha1 d1c46ed8e / 
DPDK 24.03.0 initialization... 00:28:25.287 [2024-11-06 14:31:55.511628] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87872 ] 00:28:25.287 [2024-11-06 14:31:55.694870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:25.287 [2024-11-06 14:31:55.836236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:25.287 [2024-11-06 14:31:56.072470] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:28:25.287 Running I/O for 90 seconds... 00:28:25.287 8385.00 IOPS, 32.75 MiB/s [2024-11-06T14:32:52.922Z] 8644.00 IOPS, 33.77 MiB/s [2024-11-06T14:32:52.922Z] 8694.00 IOPS, 33.96 MiB/s [2024-11-06T14:32:52.922Z] 8681.00 IOPS, 33.91 MiB/s [2024-11-06T14:32:52.922Z] 8644.80 IOPS, 33.77 MiB/s [2024-11-06T14:32:52.922Z] 8649.33 IOPS, 33.79 MiB/s [2024-11-06T14:32:52.922Z] 8628.57 IOPS, 33.71 MiB/s [2024-11-06T14:32:52.922Z] [2024-11-06 14:32:05.234869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:27968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.287 [2024-11-06 14:32:05.234942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:25.287 [2024-11-06 14:32:05.235033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:27976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.287 [2024-11-06 14:32:05.235055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:25.287 [2024-11-06 14:32:05.235081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:27984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.287 [2024-11-06 14:32:05.235099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:25.287 [2024-11-06 14:32:05.235123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:27992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.287 [2024-11-06 14:32:05.235140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:25.287 [2024-11-06 14:32:05.235163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:28000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.287 [2024-11-06 14:32:05.235181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:25.287 [2024-11-06 14:32:05.235206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:28008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.288 [2024-11-06 14:32:05.235223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:25.288 [2024-11-06 14:32:05.235246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:28016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.288 [2024-11-06 14:32:05.235263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:25.288 [2024-11-06 14:32:05.235287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.288 [2024-11-06 14:32:05.235304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:25.288 [2024-11-06 14:32:05.235328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:27456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.288 [2024-11-06 14:32:05.235345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:25.288 [2024-11-06 14:32:05.235369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:27464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.288 [2024-11-06 14:32:05.235406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.288 [2024-11-06 14:32:05.235431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:27472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.288 [2024-11-06 14:32:05.235449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:25.288 [2024-11-06 14:32:05.235473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:27480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.288 [2024-11-06 14:32:05.235489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:25.288 [2024-11-06 14:32:05.235513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.288 [2024-11-06 14:32:05.235529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:25.288 [2024-11-06 14:32:05.235552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:27496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.288 [2024-11-06 14:32:05.235569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:25.288 [2024-11-06 14:32:05.235592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:27504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.288 [2024-11-06 14:32:05.235609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:25.288 [2024-11-06 14:32:05.235632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:27512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.288 [2024-11-06 14:32:05.235649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:25.288 [2024-11-06 14:32:05.235911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:28032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.288 [2024-11-06 14:32:05.235937] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:25.288 [2024-11-06 14:32:05.235963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:28040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.288 [2024-11-06 14:32:05.235982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:25.288 [2024-11-06 14:32:05.236007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:28048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.288 [2024-11-06 14:32:05.236024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:25.288 [2024-11-06 14:32:05.236058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:28056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.288 [2024-11-06 14:32:05.236075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:25.288 [2024-11-06 14:32:05.236099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:28064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.288 [2024-11-06 14:32:05.236115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:25.288 [2024-11-06 14:32:05.236138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:28072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.288 [2024-11-06 14:32:05.236155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:25.288 [2024-11-06 14:32:05.236193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:28080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.288 [2024-11-06 14:32:05.236210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:25.288 [2024-11-06 14:32:05.236233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:28088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.288 [2024-11-06 14:32:05.236251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:25.288 [2024-11-06 14:32:05.236274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:27520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.288 [2024-11-06 14:32:05.236291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:25.288 [2024-11-06 14:32:05.236313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:27528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.288 [2024-11-06 14:32:05.236330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:25.288 [2024-11-06 14:32:05.236352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:27536 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0
00:28:25.288 [2024-11-06 14:32:05.236369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:28:25.288 [2024-11-06 14:32:05.236391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:27544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.288 [2024-11-06 14:32:05.236407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
[... remaining [2024-11-06 14:32:05.236430]–[2024-11-06 14:32:05.243469] entries omitted: READ (lba 27552–27960) and WRITE (lba 28096–28424) commands on sqid:1, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02); one interleaved progress sample: 00:28:25.290 8621.00 IOPS, 33.68 MiB/s [2024-11-06T14:32:52.925Z] ...]
00:28:25.290 8610.11 IOPS, 33.63 MiB/s [2024-11-06T14:32:52.925Z] 8618.30 IOPS, 33.67 MiB/s [2024-11-06T14:32:52.925Z] 8615.55 IOPS, 33.65 MiB/s [2024-11-06T14:32:52.925Z] 8616.92 IOPS, 33.66 MiB/s [2024-11-06T14:32:52.926Z] 8623.62 IOPS, 33.69 MiB/s [2024-11-06T14:32:52.926Z] 8649.93 IOPS, 33.79 MiB/s [2024-11-06T14:32:52.926Z] [2024-11-06 14:32:11.738933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:92184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:25.291 [2024-11-06 14:32:11.739007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
[... remaining [2024-11-06 14:32:11.739074]–[2024-11-06 14:32:11.743572] entries omitted: WRITE (lba 92192–92688) and READ (lba 91800–92152) commands on sqid:1, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:28:25.293 [2024-11-06 14:32:11.743596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:92160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.294 [2024-11-06 14:32:11.743613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:28:25.294 [2024-11-06 14:32:11.743636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:92168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.294 [2024-11-06 14:32:11.743653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:25.294 [2024-11-06 14:32:11.744414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:92176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.294 [2024-11-06 14:32:11.744451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:25.294 [2024-11-06 14:32:11.744487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:92696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.294 [2024-11-06 14:32:11.744504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:25.294 [2024-11-06 14:32:11.744547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:92704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.294 [2024-11-06 14:32:11.744564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:25.294 [2024-11-06 14:32:11.744593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:92712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.294 [2024-11-06 14:32:11.744610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:25.294 [2024-11-06 14:32:11.744640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:92720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.294 [2024-11-06 14:32:11.744657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:25.294 [2024-11-06 14:32:11.744689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:92728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.294 [2024-11-06 14:32:11.744706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:25.294 [2024-11-06 14:32:11.744735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:92736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.294 [2024-11-06 14:32:11.744751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:25.294 [2024-11-06 14:32:11.744781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:92744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.294 [2024-11-06 14:32:11.744798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:25.294 [2024-11-06 14:32:11.744854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:92752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.294 [2024-11-06 14:32:11.744884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:94 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:25.294 [2024-11-06 14:32:11.744914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:92760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.294 [2024-11-06 14:32:11.744931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:25.294 [2024-11-06 14:32:11.744960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:92768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.294 [2024-11-06 14:32:11.744977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:25.294 [2024-11-06 14:32:11.745006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:92776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.294 [2024-11-06 14:32:11.745022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:25.294 [2024-11-06 14:32:11.745051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:92784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.294 [2024-11-06 14:32:11.745071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:25.294 [2024-11-06 14:32:11.745100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:92792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.294 [2024-11-06 14:32:11.745116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:25.294 [2024-11-06 14:32:11.745145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:92800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.294 [2024-11-06 14:32:11.745162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:25.294 [2024-11-06 14:32:11.745191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:92808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.294 [2024-11-06 14:32:11.745207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:25.294 [2024-11-06 14:32:11.745236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:92816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.294 [2024-11-06 14:32:11.745253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.294 8410.87 IOPS, 32.85 MiB/s [2024-11-06T14:32:52.929Z] 8178.44 IOPS, 31.95 MiB/s [2024-11-06T14:32:52.929Z] 8270.53 IOPS, 32.31 MiB/s [2024-11-06T14:32:52.929Z] 8351.50 IOPS, 32.62 MiB/s [2024-11-06T14:32:52.929Z] 8423.95 IOPS, 32.91 MiB/s [2024-11-06T14:32:52.929Z] 8491.00 IOPS, 33.17 MiB/s [2024-11-06T14:32:52.929Z] 8546.38 IOPS, 33.38 MiB/s [2024-11-06T14:32:52.929Z] [2024-11-06 14:32:18.654201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:27664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.294 [2024-11-06 14:32:18.654281] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:25.294 [2024-11-06 14:32:18.654366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:27672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.294 [2024-11-06 14:32:18.654389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:25.294 [2024-11-06 14:32:18.654415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:27680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.294 [2024-11-06 14:32:18.654431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:25.294 [2024-11-06 14:32:18.654471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:27688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.294 [2024-11-06 14:32:18.654488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:25.294 [2024-11-06 14:32:18.654510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:27696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.294 [2024-11-06 14:32:18.654537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:25.294 [2024-11-06 14:32:18.654562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:27704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.294 [2024-11-06 14:32:18.654578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:25.294 [2024-11-06 14:32:18.654601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:27712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.294 [2024-11-06 14:32:18.654618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:25.294 [2024-11-06 14:32:18.654641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:27720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.294 [2024-11-06 14:32:18.654658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:25.294 [2024-11-06 14:32:18.654681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:27216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.294 [2024-11-06 14:32:18.654698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:25.294 [2024-11-06 14:32:18.654721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:27224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.294 [2024-11-06 14:32:18.654738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:25.294 [2024-11-06 14:32:18.654761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:27232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:25.294 [2024-11-06 14:32:18.654778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:25.294 [2024-11-06 14:32:18.654800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:27240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.294 [2024-11-06 14:32:18.654817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:25.294 [2024-11-06 14:32:18.654856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:27248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.294 [2024-11-06 14:32:18.654874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:25.294 [2024-11-06 14:32:18.654898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:27256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.294 [2024-11-06 14:32:18.654915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:25.294 [2024-11-06 14:32:18.654938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:27264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.294 [2024-11-06 14:32:18.654954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:25.294 [2024-11-06 14:32:18.654984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:27272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.294 [2024-11-06 14:32:18.655002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:25.294 [2024-11-06 14:32:18.655024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:27280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.294 [2024-11-06 14:32:18.655041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:25.294 [2024-11-06 14:32:18.655067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:27288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.294 [2024-11-06 14:32:18.655084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:25.294 [2024-11-06 14:32:18.655107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:27296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.294 [2024-11-06 14:32:18.655124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:25.294 [2024-11-06 14:32:18.655147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:27304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.295 [2024-11-06 14:32:18.655164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:25.295 [2024-11-06 14:32:18.655187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 
nsid:1 lba:27312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.295 [2024-11-06 14:32:18.655204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:25.295 [2024-11-06 14:32:18.655227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:27320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.295 [2024-11-06 14:32:18.655244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:25.295 [2024-11-06 14:32:18.655267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:27328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.295 [2024-11-06 14:32:18.655283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:25.295 [2024-11-06 14:32:18.655306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.295 [2024-11-06 14:32:18.655323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:25.295 [2024-11-06 14:32:18.655346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.295 [2024-11-06 14:32:18.655363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:25.295 [2024-11-06 14:32:18.655386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:27352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.295 [2024-11-06 14:32:18.655403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.295 [2024-11-06 14:32:18.655426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:27360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.295 [2024-11-06 14:32:18.655443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:25.295 [2024-11-06 14:32:18.655466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:27368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.295 [2024-11-06 14:32:18.655489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:25.295 [2024-11-06 14:32:18.655512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:27376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.295 [2024-11-06 14:32:18.655529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:25.295 [2024-11-06 14:32:18.655552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:27384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.295 [2024-11-06 14:32:18.655569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:25.295 [2024-11-06 14:32:18.655592] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:27392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.295 [2024-11-06 14:32:18.655609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:25.295 [2024-11-06 14:32:18.655648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:27400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.295 [2024-11-06 14:32:18.655666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:25.295 [2024-11-06 14:32:18.655695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:27728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.295 [2024-11-06 14:32:18.655713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:25.295 [2024-11-06 14:32:18.655737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.295 [2024-11-06 14:32:18.655754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:25.295 [2024-11-06 14:32:18.655777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:27744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.295 [2024-11-06 14:32:18.655794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:25.295 [2024-11-06 14:32:18.655817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:27752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.295 [2024-11-06 14:32:18.655845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:25.295 [2024-11-06 14:32:18.655870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:27760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.295 [2024-11-06 14:32:18.655886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:25.295 [2024-11-06 14:32:18.655910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.295 [2024-11-06 14:32:18.655926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:25.295 [2024-11-06 14:32:18.655950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:27776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.295 [2024-11-06 14:32:18.655967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:25.295 [2024-11-06 14:32:18.655991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.295 [2024-11-06 14:32:18.656014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002f p:0 m:0 dnr:0 
00:28:25.295 [2024-11-06 14:32:18.656037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:27408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.295 [2024-11-06 14:32:18.656054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:25.295 [2024-11-06 14:32:18.656077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.295 [2024-11-06 14:32:18.656094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:25.295 [2024-11-06 14:32:18.656118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:27424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.295 [2024-11-06 14:32:18.656134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:25.295 [2024-11-06 14:32:18.656157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:27432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.295 [2024-11-06 14:32:18.656173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:25.295 [2024-11-06 14:32:18.656196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:27440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.295 [2024-11-06 14:32:18.656212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:25.295 [2024-11-06 14:32:18.656235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:27448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.295 [2024-11-06 14:32:18.656252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:25.295 [2024-11-06 14:32:18.656274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:27456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.295 [2024-11-06 14:32:18.656292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:25.295 [2024-11-06 14:32:18.656315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:27464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.295 [2024-11-06 14:32:18.656332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:25.295 [2024-11-06 14:32:18.656357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:27472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.295 [2024-11-06 14:32:18.656374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:25.295 [2024-11-06 14:32:18.656414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:27480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.295 [2024-11-06 14:32:18.656430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:25.295 [2024-11-06 14:32:18.656454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:27488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.295 [2024-11-06 14:32:18.656470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:25.295 [2024-11-06 14:32:18.656494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:27496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.295 [2024-11-06 14:32:18.656511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:25.295 [2024-11-06 14:32:18.656540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:27504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.295 [2024-11-06 14:32:18.656558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:25.295 [2024-11-06 14:32:18.656581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:27512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.295 [2024-11-06 14:32:18.656598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:25.295 [2024-11-06 14:32:18.656621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:27520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.295 [2024-11-06 14:32:18.656638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:25.295 [2024-11-06 14:32:18.656661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:27528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.295 [2024-11-06 14:32:18.656678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:25.295 [2024-11-06 14:32:18.656701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:27792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.295 [2024-11-06 14:32:18.656717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:25.295 [2024-11-06 14:32:18.656740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:27800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.295 [2024-11-06 14:32:18.656757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.295 [2024-11-06 14:32:18.656781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:27808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.295 [2024-11-06 14:32:18.656798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:25.296 [2024-11-06 14:32:18.656821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:27816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.296 [2024-11-06 14:32:18.656848] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:25.296 [2024-11-06 14:32:18.656873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:27824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.296 [2024-11-06 14:32:18.656890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:25.296 [2024-11-06 14:32:18.656913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:27832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.296 [2024-11-06 14:32:18.656930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:25.296 [2024-11-06 14:32:18.656953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:27840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.296 [2024-11-06 14:32:18.656970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:25.296 [2024-11-06 14:32:18.656992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:27848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.296 [2024-11-06 14:32:18.657009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:25.296 [2024-11-06 14:32:18.657039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:27856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.296 [2024-11-06 14:32:18.657058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:25.296 [2024-11-06 14:32:18.657082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:27864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.296 [2024-11-06 14:32:18.657098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:25.296 [2024-11-06 14:32:18.657120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:27872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.296 [2024-11-06 14:32:18.657138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:25.296 [2024-11-06 14:32:18.657160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:27880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.296 [2024-11-06 14:32:18.657177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:25.296 [2024-11-06 14:32:18.657204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:27888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.296 [2024-11-06 14:32:18.657221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:25.296 [2024-11-06 14:32:18.657244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:27896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:25.296 [2024-11-06 14:32:18.657261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:25.296 [2024-11-06 14:32:18.657284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.296 [2024-11-06 14:32:18.657301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:25.296 [2024-11-06 14:32:18.657324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:27912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.296 [2024-11-06 14:32:18.657341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:25.296 [2024-11-06 14:32:18.657363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:27920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.296 [2024-11-06 14:32:18.657380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:25.296 [2024-11-06 14:32:18.657403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:27928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.296 [2024-11-06 14:32:18.657419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:25.296 [2024-11-06 14:32:18.657442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:27936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.296 [2024-11-06 14:32:18.657459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:25.296 [2024-11-06 14:32:18.657482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:27944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.296 [2024-11-06 14:32:18.657499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:25.296 [2024-11-06 14:32:18.657523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.296 [2024-11-06 14:32:18.657544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:25.296 [2024-11-06 14:32:18.657569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:27536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.296 [2024-11-06 14:32:18.657585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:25.296 [2024-11-06 14:32:18.657608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:27544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.296 [2024-11-06 14:32:18.657625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:25.296 [2024-11-06 14:32:18.657648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 
lba:27552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.296 [2024-11-06 14:32:18.657665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:25.296 [2024-11-06 14:32:18.657719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:27560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.296 [2024-11-06 14:32:18.657737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:25.296 [2024-11-06 14:32:18.657762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:27568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.296 [2024-11-06 14:32:18.657779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:25.296 [2024-11-06 14:32:18.657801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:27576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.296 [2024-11-06 14:32:18.657818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:25.296 [2024-11-06 14:32:18.657851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:27584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.296 [2024-11-06 14:32:18.657869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:25.296 [2024-11-06 14:32:18.657895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.296 [2024-11-06 14:32:18.657911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:25.296 [2024-11-06 14:32:18.657936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:27960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.296 [2024-11-06 14:32:18.657953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:25.296 [2024-11-06 14:32:18.657976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:27968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.296 [2024-11-06 14:32:18.657993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:25.296 [2024-11-06 14:32:18.658016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:27976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.296 [2024-11-06 14:32:18.658033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:25.296 [2024-11-06 14:32:18.658083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:27984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.296 [2024-11-06 14:32:18.658110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:25.296 [2024-11-06 14:32:18.658134] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:27992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.296 [2024-11-06 14:32:18.658151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.296 [2024-11-06 14:32:18.658174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:28000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.296 [2024-11-06 14:32:18.658191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:25.296 [2024-11-06 14:32:18.658214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:28008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.296 [2024-11-06 14:32:18.658231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:25.296 [2024-11-06 14:32:18.658254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:28016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.296 [2024-11-06 14:32:18.658270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:25.297 [2024-11-06 14:32:18.658293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:28024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.297 [2024-11-06 14:32:18.658310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:25.297 [2024-11-06 14:32:18.658333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.297 [2024-11-06 14:32:18.658349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:25.297 [2024-11-06 14:32:18.658372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:28040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.297 [2024-11-06 14:32:18.658389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:25.297 [2024-11-06 14:32:18.658413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:28048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.297 [2024-11-06 14:32:18.658430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:25.297 [2024-11-06 14:32:18.658453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:28056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.297 [2024-11-06 14:32:18.658470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:25.297 [2024-11-06 14:32:18.658493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:28064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.297 [2024-11-06 14:32:18.658510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006a p:0 m:0 dnr:0 
00:28:25.297 [2024-11-06 14:32:18.658542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:28072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.297 [2024-11-06 14:32:18.658560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:25.297 [2024-11-06 14:32:18.658585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:28080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.297 [2024-11-06 14:32:18.658602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:25.297 [2024-11-06 14:32:18.658632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:27600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.297 [2024-11-06 14:32:18.658649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:25.297 [2024-11-06 14:32:18.658672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:27608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.297 [2024-11-06 14:32:18.658689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:25.297 [2024-11-06 14:32:18.658712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:27616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.297 [2024-11-06 14:32:18.658729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:25.297 [2024-11-06 14:32:18.658752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:27624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.297 [2024-11-06 14:32:18.658769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:25.297 [2024-11-06 14:32:18.658792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:27632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.297 [2024-11-06 14:32:18.658809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:25.297 [2024-11-06 14:32:18.658832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:27640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.297 [2024-11-06 14:32:18.658860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:25.297 [2024-11-06 14:32:18.658888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:27648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.297 [2024-11-06 14:32:18.658905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:25.297 [2024-11-06 14:32:18.659557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:27656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.297 [2024-11-06 14:32:18.659589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:25.297 [2024-11-06 14:32:18.659625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:28088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.297 [2024-11-06 14:32:18.659643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:25.297 [2024-11-06 14:32:18.659673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:28096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.297 [2024-11-06 14:32:18.659690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:25.297 [2024-11-06 14:32:18.659718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:28104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.297 [2024-11-06 14:32:18.659735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:25.297 [2024-11-06 14:32:18.659764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:28112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.297 [2024-11-06 14:32:18.659782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:25.297 [2024-11-06 14:32:18.659848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:28120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.297 [2024-11-06 14:32:18.659867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.297 [2024-11-06 14:32:18.659896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:28128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.297 [2024-11-06 14:32:18.659913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:25.297 [2024-11-06 14:32:18.659942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:28136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.297 [2024-11-06 14:32:18.659959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:25.297 [2024-11-06 14:32:18.660004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:28144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.297 [2024-11-06 14:32:18.660023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:25.297 [2024-11-06 14:32:18.660052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:28152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.297 [2024-11-06 14:32:18.660068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:25.297 [2024-11-06 14:32:18.660097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:28160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.297 [2024-11-06 14:32:18.660114] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:25.297 [2024-11-06 14:32:18.660142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:28168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.297 [2024-11-06 14:32:18.660159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:25.297 [2024-11-06 14:32:18.660188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:28176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.297 [2024-11-06 14:32:18.660204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.297 [2024-11-06 14:32:18.660233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.297 [2024-11-06 14:32:18.660250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.297 [2024-11-06 14:32:18.660278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:28192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.297 [2024-11-06 14:32:18.660295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:25.297 [2024-11-06 14:32:18.660325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:28200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.297 [2024-11-06 14:32:18.660342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:25.297 [2024-11-06 14:32:18.660374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:28208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.297 [2024-11-06 14:32:18.660391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:25.297 [2024-11-06 14:32:18.660420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:28216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.297 [2024-11-06 14:32:18.660444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:25.297 [2024-11-06 14:32:18.660473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:28224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.297 [2024-11-06 14:32:18.660490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:25.297 [2024-11-06 14:32:18.660518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.297 [2024-11-06 14:32:18.660535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:25.297 8346.64 IOPS, 32.60 MiB/s [2024-11-06T14:32:52.932Z] 7983.74 IOPS, 31.19 MiB/s [2024-11-06T14:32:52.932Z] 7651.08 IOPS, 29.89 MiB/s [2024-11-06T14:32:52.932Z] 7345.04 IOPS, 
28.69 MiB/s [2024-11-06T14:32:52.932Z] 7062.54 IOPS, 27.59 MiB/s [2024-11-06T14:32:52.932Z] 6800.96 IOPS, 26.57 MiB/s [2024-11-06T14:32:52.932Z] 6558.07 IOPS, 25.62 MiB/s [2024-11-06T14:32:52.932Z] 6510.93 IOPS, 25.43 MiB/s [2024-11-06T14:32:52.932Z] 6613.90 IOPS, 25.84 MiB/s [2024-11-06T14:32:52.932Z] 6708.61 IOPS, 26.21 MiB/s [2024-11-06T14:32:52.932Z] 6801.28 IOPS, 26.57 MiB/s [2024-11-06T14:32:52.932Z] 6886.09 IOPS, 26.90 MiB/s [2024-11-06T14:32:52.932Z] 6965.91 IOPS, 27.21 MiB/s [2024-11-06T14:32:52.932Z] [2024-11-06 14:32:31.757526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.297 [2024-11-06 14:32:31.757603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:25.297 [2024-11-06 14:32:31.757673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:102544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.297 [2024-11-06 14:32:31.757693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:25.298 [2024-11-06 14:32:31.757719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:102552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.298 [2024-11-06 14:32:31.757737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:25.298 [2024-11-06 14:32:31.757762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.298 [2024-11-06 14:32:31.757778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.298 [2024-11-06 14:32:31.757803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:102568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.298 [2024-11-06 14:32:31.757819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:25.298 [2024-11-06 14:32:31.757855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:102576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.298 [2024-11-06 14:32:31.757872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:25.298 [2024-11-06 14:32:31.757897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:102584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.298 [2024-11-06 14:32:31.757914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:25.298 [2024-11-06 14:32:31.757937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:102592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.298 [2024-11-06 14:32:31.757954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:25.298 [2024-11-06 14:32:31.757996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:102024 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:25.298 [2024-11-06 14:32:31.758013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:25.298 [2024-11-06 14:32:31.758036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.298 [2024-11-06 14:32:31.758052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:25.298 [2024-11-06 14:32:31.758076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:102040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.298 [2024-11-06 14:32:31.758092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:25.298 [2024-11-06 14:32:31.758116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:102048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.298 [2024-11-06 14:32:31.758132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:25.298 [2024-11-06 14:32:31.758155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:102056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.298 [2024-11-06 14:32:31.758172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:25.298 [2024-11-06 14:32:31.758195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:102064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.298 [2024-11-06 14:32:31.758211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:25.298 [2024-11-06 14:32:31.758233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:102072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.298 [2024-11-06 14:32:31.758249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:25.298 [2024-11-06 14:32:31.758272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:102080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.298 [2024-11-06 14:32:31.758288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:25.298 [2024-11-06 14:32:31.758310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:102088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.298 [2024-11-06 14:32:31.758326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:25.298 [2024-11-06 14:32:31.758352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:102096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.298 [2024-11-06 14:32:31.758368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:25.298 [2024-11-06 14:32:31.758391] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:12 nsid:1 lba:102104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.298 [2024-11-06 14:32:31.758408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:25.298 [2024-11-06 14:32:31.758431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:102112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.298 [2024-11-06 14:32:31.758448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:25.298 [2024-11-06 14:32:31.758471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:102120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.298 [2024-11-06 14:32:31.758494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:25.298 [2024-11-06 14:32:31.758517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:102128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.298 [2024-11-06 14:32:31.758543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:25.298 [2024-11-06 14:32:31.758566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.298 [2024-11-06 14:32:31.758583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:25.298 [2024-11-06 14:32:31.758607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:102144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.298 [2024-11-06 14:32:31.758624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:25.298 [2024-11-06 14:32:31.758680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:101992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.298 [2024-11-06 14:32:31.758700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.298 [2024-11-06 14:32:31.758720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:102000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.298 [2024-11-06 14:32:31.758736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.298 [2024-11-06 14:32:31.758753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:102008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.298 [2024-11-06 14:32:31.758770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.298 [2024-11-06 14:32:31.758787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:102016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.298 [2024-11-06 14:32:31.758804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.298 [2024-11-06 14:32:31.758821] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:102600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.298 [2024-11-06 14:32:31.758848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.298 [2024-11-06 14:32:31.758865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:102608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.298 [2024-11-06 14:32:31.758882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.298 [2024-11-06 14:32:31.758918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:102616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.298 [2024-11-06 14:32:31.758939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.298 [2024-11-06 14:32:31.758957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:102624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.298 [2024-11-06 14:32:31.758973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.298 [2024-11-06 14:32:31.758992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:102632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.298 [2024-11-06 14:32:31.759008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.298 [2024-11-06 14:32:31.759038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:102640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.298 [2024-11-06 14:32:31.759055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.298 [2024-11-06 14:32:31.759073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:102648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.298 [2024-11-06 14:32:31.759089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.298 [2024-11-06 14:32:31.759106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:102656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.298 [2024-11-06 14:32:31.759122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.298 [2024-11-06 14:32:31.759140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:102664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.298 [2024-11-06 14:32:31.759155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.298 [2024-11-06 14:32:31.759172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:102672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.298 [2024-11-06 14:32:31.759188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.298 [2024-11-06 14:32:31.759206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:110 nsid:1 lba:102680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.298 [2024-11-06 14:32:31.759222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.298 [2024-11-06 14:32:31.759239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:102688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.298 [2024-11-06 14:32:31.759255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.298 [2024-11-06 14:32:31.759271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:102696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.298 [2024-11-06 14:32:31.759287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.298 [2024-11-06 14:32:31.759304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:102704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.299 [2024-11-06 14:32:31.759321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.299 [2024-11-06 14:32:31.759337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:102712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.299 [2024-11-06 14:32:31.759354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.299 [2024-11-06 14:32:31.759370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:102720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.299 [2024-11-06 14:32:31.759386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.299 [2024-11-06 14:32:31.759403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:102152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.299 [2024-11-06 14:32:31.759419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.299 [2024-11-06 14:32:31.759436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:102160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.299 [2024-11-06 14:32:31.759458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.299 [2024-11-06 14:32:31.759475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:102168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.299 [2024-11-06 14:32:31.759491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.299 [2024-11-06 14:32:31.759508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:102176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.299 [2024-11-06 14:32:31.759524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.299 [2024-11-06 14:32:31.759542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:102184 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.299 [2024-11-06 14:32:31.759558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.299 [2024-11-06 14:32:31.759590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:102192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.299 [2024-11-06 14:32:31.759606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.299 [2024-11-06 14:32:31.759623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:102200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.299 [2024-11-06 14:32:31.759639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.299 [2024-11-06 14:32:31.759656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:102208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.299 [2024-11-06 14:32:31.759672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.299 [2024-11-06 14:32:31.759689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:102216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.299 [2024-11-06 14:32:31.759705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.299 [2024-11-06 14:32:31.759722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:102224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.299 [2024-11-06 14:32:31.759738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.299 [2024-11-06 14:32:31.759755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:102232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.299 [2024-11-06 14:32:31.759771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.299 [2024-11-06 14:32:31.759788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:102240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.299 [2024-11-06 14:32:31.759804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.299 [2024-11-06 14:32:31.759821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:102248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.299 [2024-11-06 14:32:31.759847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.299 [2024-11-06 14:32:31.759864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:102256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.299 [2024-11-06 14:32:31.759880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.299 [2024-11-06 14:32:31.759902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:102264 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:25.299 [2024-11-06 14:32:31.759919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.299 [2024-11-06 14:32:31.759936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:102272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.299 [2024-11-06 14:32:31.759952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.299 [2024-11-06 14:32:31.759969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:102728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.299 [2024-11-06 14:32:31.759985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.299 [2024-11-06 14:32:31.760002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:102736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.299 [2024-11-06 14:32:31.760019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.299 [2024-11-06 14:32:31.760036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:102744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.299 [2024-11-06 14:32:31.760052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.299 [2024-11-06 14:32:31.760069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:102752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.299 [2024-11-06 14:32:31.760085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.299 [2024-11-06 14:32:31.760102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:102760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.299 [2024-11-06 14:32:31.760118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.299 [2024-11-06 14:32:31.760136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:102768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.299 [2024-11-06 14:32:31.760152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.299 [2024-11-06 14:32:31.760169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:102776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.299 [2024-11-06 14:32:31.760184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.299 [2024-11-06 14:32:31.760202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:102784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.299 [2024-11-06 14:32:31.760217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.299 [2024-11-06 14:32:31.760234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:102280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.299 [2024-11-06 
14:32:31.760250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.299 [2024-11-06 14:32:31.760267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:102288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.299 [2024-11-06 14:32:31.760283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.299 [2024-11-06 14:32:31.760300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:102296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.299 [2024-11-06 14:32:31.760320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.299 [2024-11-06 14:32:31.760338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:102304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.299 [2024-11-06 14:32:31.760354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.299 [2024-11-06 14:32:31.760371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:102312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.299 [2024-11-06 14:32:31.760387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.299 [2024-11-06 14:32:31.760404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:102320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.299 [2024-11-06 14:32:31.760420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.299 [2024-11-06 14:32:31.760437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:102328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.299 [2024-11-06 14:32:31.760452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.299 [2024-11-06 14:32:31.760469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.299 [2024-11-06 14:32:31.760485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.299 [2024-11-06 14:32:31.760502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:102344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.299 [2024-11-06 14:32:31.760517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.299 [2024-11-06 14:32:31.760534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:102352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.299 [2024-11-06 14:32:31.760550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.299 [2024-11-06 14:32:31.760567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:102360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.299 [2024-11-06 14:32:31.760582] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.299 [2024-11-06 14:32:31.760600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:102368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.299 [2024-11-06 14:32:31.760615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.299 [2024-11-06 14:32:31.760633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:102376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.299 [2024-11-06 14:32:31.760649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.299 [2024-11-06 14:32:31.760667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.300 [2024-11-06 14:32:31.760683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.300 [2024-11-06 14:32:31.760700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:102392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.300 [2024-11-06 14:32:31.760716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.300 [2024-11-06 14:32:31.760737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:102400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.300 [2024-11-06 14:32:31.760754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.300 [2024-11-06 14:32:31.760776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:102792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.300 [2024-11-06 14:32:31.760792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.300 [2024-11-06 14:32:31.760809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:102800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.300 [2024-11-06 14:32:31.760825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.300 [2024-11-06 14:32:31.760854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:102808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.300 [2024-11-06 14:32:31.760870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.300 [2024-11-06 14:32:31.760888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:102816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.300 [2024-11-06 14:32:31.760904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.300 [2024-11-06 14:32:31.760921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:102824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.300 [2024-11-06 14:32:31.760937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.300 [2024-11-06 14:32:31.760954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:102832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.300 [2024-11-06 14:32:31.760970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.300 [2024-11-06 14:32:31.760987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:102840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.300 [2024-11-06 14:32:31.761003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.300 [2024-11-06 14:32:31.761020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:102848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.300 [2024-11-06 14:32:31.761035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.300 [2024-11-06 14:32:31.761053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:102408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.300 [2024-11-06 14:32:31.761068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.300 [2024-11-06 14:32:31.761085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:102416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.300 [2024-11-06 14:32:31.761101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.300 [2024-11-06 14:32:31.761118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:102424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.300 [2024-11-06 14:32:31.761134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.300 [2024-11-06 14:32:31.761151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:102432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.300 [2024-11-06 14:32:31.761171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.300 [2024-11-06 14:32:31.761189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.300 [2024-11-06 14:32:31.761205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.300 [2024-11-06 14:32:31.761222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:102448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.300 [2024-11-06 14:32:31.761238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.300 [2024-11-06 14:32:31.761255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:102456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.300 [2024-11-06 14:32:31.761271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.300 [2024-11-06 14:32:31.761288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:102464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.300 [2024-11-06 14:32:31.761304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.300 [2024-11-06 14:32:31.761322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:102472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.300 [2024-11-06 14:32:31.761338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.300 [2024-11-06 14:32:31.761356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:102480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.300 [2024-11-06 14:32:31.761372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.300 [2024-11-06 14:32:31.761389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:102488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.300 [2024-11-06 14:32:31.761405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.300 [2024-11-06 14:32:31.761422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:102496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.300 [2024-11-06 14:32:31.761438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.300 [2024-11-06 14:32:31.761455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:102504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.300 [2024-11-06 14:32:31.761471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.300 [2024-11-06 14:32:31.761488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:102512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.300 [2024-11-06 14:32:31.761503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.300 [2024-11-06 14:32:31.761520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:102520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.300 [2024-11-06 14:32:31.761536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.300 [2024-11-06 14:32:31.761552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002bf00 is same with the state(6) to be set 00:28:25.300 [2024-11-06 14:32:31.761580] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:25.300 [2024-11-06 14:32:31.761593] nvme_qpair.c: 558:nvme_qpair_manual_complete_re 14:32:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:25.300 quest: *NOTICE*: Command completed manually: 00:28:25.300 [2024-11-06 14:32:31.761612] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102528 len:8 PRP1 0x0 PRP2 0x0 00:28:25.300 [2024-11-06 14:32:31.761634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.300 [2024-11-06 14:32:31.761652] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:25.300 [2024-11-06 14:32:31.761664] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:25.300 [2024-11-06 14:32:31.761677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102856 len:8 PRP1 0x0 PRP2 0x0 00:28:25.300 [2024-11-06 14:32:31.761692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.300 [2024-11-06 14:32:31.761707] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:25.300 [2024-11-06 14:32:31.761723] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:25.300 [2024-11-06 14:32:31.761736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102864 len:8 PRP1 0x0 PRP2 0x0 00:28:25.300 [2024-11-06 14:32:31.761762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.300 [2024-11-06 14:32:31.761777] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:25.300 [2024-11-06 14:32:31.761789] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:25.300 [2024-11-06 14:32:31.761802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102872 len:8 PRP1 0x0 PRP2 0x0 00:28:25.300 [2024-11-06 14:32:31.761817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.300 [2024-11-06 14:32:31.761832] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:25.300 [2024-11-06 14:32:31.761855] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:25.300 [2024-11-06 14:32:31.761868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102880 len:8 PRP1 0x0 PRP2 0x0 00:28:25.300 [2024-11-06 14:32:31.761883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.300 [2024-11-06 14:32:31.761898] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:25.300 [2024-11-06 14:32:31.761910] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:25.300 [2024-11-06 14:32:31.761923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102888 len:8 PRP1 0x0 PRP2 0x0 00:28:25.300 [2024-11-06 14:32:31.761938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.300 [2024-11-06 14:32:31.761954] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:25.300 [2024-11-06 14:32:31.761965] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:25.300 [2024-11-06 14:32:31.761978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:102896 len:8 PRP1 0x0 PRP2 0x0 00:28:25.300 [2024-11-06 14:32:31.761993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.300 [2024-11-06 14:32:31.762009] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:25.300 [2024-11-06 14:32:31.762021] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:25.300 [2024-11-06 14:32:31.762033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102904 len:8 PRP1 0x0 PRP2 0x0 00:28:25.301 [2024-11-06 14:32:31.762048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.301 [2024-11-06 14:32:31.762063] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:25.301 [2024-11-06 14:32:31.762080] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:25.301 [2024-11-06 14:32:31.762093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102912 len:8 PRP1 0x0 PRP2 0x0 00:28:25.301 [2024-11-06 14:32:31.762108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.301 [2024-11-06 14:32:31.762124] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:25.301 [2024-11-06 14:32:31.762135] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:25.301 [2024-11-06 14:32:31.762147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102920 len:8 PRP1 0x0 PRP2 0x0 00:28:25.301 [2024-11-06 14:32:31.762163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.301 [2024-11-06 14:32:31.762179] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:25.301 [2024-11-06 14:32:31.762191] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:25.301 [2024-11-06 14:32:31.762204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102928 len:8 PRP1 0x0 PRP2 0x0 00:28:25.301 [2024-11-06 14:32:31.762219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.301 [2024-11-06 14:32:31.762234] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:25.301 [2024-11-06 14:32:31.762246] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:25.301 [2024-11-06 14:32:31.762258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102936 len:8 PRP1 0x0 PRP2 0x0 00:28:25.301 [2024-11-06 14:32:31.762273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.301 [2024-11-06 14:32:31.762288] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:25.301 [2024-11-06 14:32:31.762301] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:25.301 [2024-11-06 14:32:31.762313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102944 len:8 PRP1 0x0 PRP2 
0x0 00:28:25.301 [2024-11-06 14:32:31.762328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.301 [2024-11-06 14:32:31.762343] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:25.301 [2024-11-06 14:32:31.762354] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:25.301 [2024-11-06 14:32:31.762366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102952 len:8 PRP1 0x0 PRP2 0x0 00:28:25.301 [2024-11-06 14:32:31.762382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.301 [2024-11-06 14:32:31.762396] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:25.301 [2024-11-06 14:32:31.762407] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:25.301 [2024-11-06 14:32:31.762420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102960 len:8 PRP1 0x0 PRP2 0x0 00:28:25.301 [2024-11-06 14:32:31.762434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.301 [2024-11-06 14:32:31.762450] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:25.301 [2024-11-06 14:32:31.762461] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:25.301 [2024-11-06 14:32:31.762473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102968 len:8 PRP1 0x0 PRP2 0x0 00:28:25.301 [2024-11-06 14:32:31.762488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.301 [2024-11-06 14:32:31.762508] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:25.301 [2024-11-06 14:32:31.762519] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:25.301 [2024-11-06 14:32:31.762541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102976 len:8 PRP1 0x0 PRP2 0x0 00:28:25.301 [2024-11-06 14:32:31.762557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.301 [2024-11-06 14:32:31.762572] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:25.301 [2024-11-06 14:32:31.762583] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:25.301 [2024-11-06 14:32:31.762596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102984 len:8 PRP1 0x0 PRP2 0x0 00:28:25.301 [2024-11-06 14:32:31.762612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.301 [2024-11-06 14:32:31.762627] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:25.301 [2024-11-06 14:32:31.762639] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:25.301 [2024-11-06 14:32:31.762652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102992 len:8 PRP1 0x0 PRP2 0x0 00:28:25.301 [2024-11-06 14:32:31.762667] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.301 [2024-11-06 14:32:31.762682] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:25.301 [2024-11-06 14:32:31.762693] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:25.301 [2024-11-06 14:32:31.762705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103000 len:8 PRP1 0x0 PRP2 0x0 00:28:25.301 [2024-11-06 14:32:31.762720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.301 [2024-11-06 14:32:31.762735] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:25.301 [2024-11-06 14:32:31.762749] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:25.301 [2024-11-06 14:32:31.762762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103008 len:8 PRP1 0x0 PRP2 0x0 00:28:25.301 [2024-11-06 14:32:31.762777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.301 [2024-11-06 14:32:31.764165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:25.301 [2024-11-06 14:32:31.764253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.301 [2024-11-06 14:32:31.764276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.301 [2024-11-06 14:32:31.764319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b500 (9): Bad file descriptor 00:28:25.301 [2024-11-06 14:32:31.764717] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.301 [2024-11-06 14:32:31.764751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002b500 with addr=10.0.0.3, port=4421 00:28:25.301 [2024-11-06 14:32:31.764770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b500 is same with the state(6) to be set 00:28:25.301 [2024-11-06 14:32:31.764819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b500 (9): Bad file descriptor 00:28:25.301 [2024-11-06 14:32:31.764863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:25.301 [2024-11-06 14:32:31.764896] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:25.301 [2024-11-06 14:32:31.764914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:25.301 [2024-11-06 14:32:31.764932] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
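[Editor's note, not part of the captured output] The burst of notices above is the multipath behaviour this test exercises: the active path starts completing I/O with ASYMMETRIC ACCESS INACCESSIBLE (status 03/02), the remaining queued commands are failed with ABORTED - SQ DELETION (00/08), and bdev_nvme keeps retrying the controller reset against the alternate listener at 10.0.0.3:4421 (the retry succeeds further down). A minimal sketch of how the "(SCT/SC)" pair printed by spdk_nvme_print_completion can be read follows; decode_status is a hypothetical helper written for this note, not an SPDK script:

    decode_status() {                       # usage: decode_status 03 02
        local sct=$((16#$1)) sc=$((16#$2))  # hex fields as printed in "(03/02)"
        case "$sct/$sc" in
            3/2) echo "Path Related / Asymmetric Access Inaccessible" ;;
            0/8) echo "Generic / Command Aborted - SQ Deletion" ;;
            *)   echo "SCT=$sct SC=$sc (see the NVMe base spec status code tables)" ;;
        esac
    }
    decode_status 03 02    # -> Path Related / Asymmetric Access Inaccessible
    decode_status 00 08    # -> Generic / Command Aborted - SQ Deletion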
00:28:25.301 [2024-11-06 14:32:31.764950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:25.301 7037.77 IOPS, 27.49 MiB/s [2024-11-06T14:32:52.936Z] 7111.83 IOPS, 27.78 MiB/s [2024-11-06T14:32:52.936Z] 7177.92 IOPS, 28.04 MiB/s [2024-11-06T14:32:52.936Z] 7243.42 IOPS, 28.29 MiB/s [2024-11-06T14:32:52.936Z] 7304.56 IOPS, 28.53 MiB/s [2024-11-06T14:32:52.936Z] 7363.25 IOPS, 28.76 MiB/s [2024-11-06T14:32:52.936Z] 7418.98 IOPS, 28.98 MiB/s [2024-11-06T14:32:52.936Z] 7470.90 IOPS, 29.18 MiB/s [2024-11-06T14:32:52.936Z] 7519.86 IOPS, 29.37 MiB/s [2024-11-06T14:32:52.936Z] 7567.14 IOPS, 29.56 MiB/s [2024-11-06T14:32:52.936Z] [2024-11-06 14:32:41.805862] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:28:25.301 7612.07 IOPS, 29.73 MiB/s [2024-11-06T14:32:52.936Z] 7655.46 IOPS, 29.90 MiB/s [2024-11-06T14:32:52.936Z] 7697.17 IOPS, 30.07 MiB/s [2024-11-06T14:32:52.936Z] 7737.23 IOPS, 30.22 MiB/s [2024-11-06T14:32:52.936Z] 7771.57 IOPS, 30.36 MiB/s [2024-11-06T14:32:52.936Z] 7807.82 IOPS, 30.50 MiB/s [2024-11-06T14:32:52.936Z] 7842.47 IOPS, 30.63 MiB/s [2024-11-06T14:32:52.936Z] 7875.60 IOPS, 30.76 MiB/s [2024-11-06T14:32:52.936Z] 7906.85 IOPS, 30.89 MiB/s [2024-11-06T14:32:52.936Z] 7937.74 IOPS, 31.01 MiB/s [2024-11-06T14:32:52.936Z] Received shutdown signal, test time was about 54.620250 seconds 00:28:25.301 00:28:25.301 Latency(us) 00:28:25.301 [2024-11-06T14:32:52.936Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:25.301 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:28:25.301 Verification LBA range: start 0x0 length 0x4000 00:28:25.302 Nvme0n1 : 54.62 7954.86 31.07 0.00 0.00 16073.07 835.65 7061253.96 00:28:25.302 [2024-11-06T14:32:52.937Z] =================================================================================================================== 00:28:25.302 [2024-11-06T14:32:52.937Z] Total : 7954.86 31.07 0.00 0.00 16073.07 835.65 7061253.96 00:28:25.561 14:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:28:25.561 14:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:28:25.561 14:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:28:25.561 14:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:25.561 14:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:28:25.561 14:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:25.561 14:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:28:25.561 14:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:25.561 14:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:25.561 rmmod nvme_tcp 00:28:25.561 rmmod nvme_fabrics 00:28:25.561 rmmod nvme_keyring 00:28:25.561 14:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:25.561 14:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 00:28:25.561 14:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:28:25.561 14:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # 
'[' -n 87822 ']' 00:28:25.561 14:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 87822 00:28:25.561 14:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@952 -- # '[' -z 87822 ']' 00:28:25.561 14:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # kill -0 87822 00:28:25.561 14:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@957 -- # uname 00:28:25.835 14:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:25.835 14:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87822 00:28:25.835 14:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:25.835 14:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:25.835 killing process with pid 87822 00:28:25.835 14:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87822' 00:28:25.835 14:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@971 -- # kill 87822 00:28:25.835 14:32:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@976 -- # wait 87822 00:28:27.212 14:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:27.212 14:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:27.212 14:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:27.212 14:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:28:27.212 14:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 00:28:27.212 14:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:27.212 14:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:28:27.212 14:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:27.212 14:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:28:27.213 14:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:28:27.213 14:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:28:27.213 14:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:28:27.213 14:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:28:27.213 14:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:28:27.213 14:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:28:27.213 14:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:28:27.213 14:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:28:27.213 14:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:28:27.213 14:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:28:27.213 14:32:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:28:27.471 14:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:27.471 14:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:27.471 14:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:28:27.471 14:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:27.471 14:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:27.471 14:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:27.471 14:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:28:27.471 ************************************ 00:28:27.471 END TEST nvmf_host_multipath 00:28:27.471 ************************************ 00:28:27.471 00:28:27.471 real 1m2.674s 00:28:27.471 user 2m47.504s 00:28:27.471 sys 0m21.826s 00:28:27.471 14:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:27.471 14:32:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:28:27.471 14:32:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:28:27.471 14:32:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:28:27.471 14:32:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:27.471 14:32:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.471 ************************************ 00:28:27.472 START TEST nvmf_timeout 00:28:27.472 ************************************ 00:28:27.472 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:28:27.731 * Looking for test storage... 
00:28:27.731 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:28:27.731 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:27.731 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1691 -- # lcov --version 00:28:27.731 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:27.731 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:27.731 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:27.731 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:27.731 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:27.731 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:28:27.731 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:28:27.731 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:28:27.731 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:28:27.731 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:28:27.731 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:28:27.731 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:28:27.731 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:27.731 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:28:27.731 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:28:27.731 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:27.731 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:27.731 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:28:27.731 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:28:27.731 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:27.731 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:28:27.731 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:28:27.731 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:28:27.731 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:28:27.731 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:27.731 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:28:27.731 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:28:27.731 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:27.731 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:27.731 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:28:27.731 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:27.731 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:27.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:27.731 --rc genhtml_branch_coverage=1 00:28:27.731 --rc genhtml_function_coverage=1 00:28:27.731 --rc genhtml_legend=1 00:28:27.731 --rc geninfo_all_blocks=1 00:28:27.731 --rc geninfo_unexecuted_blocks=1 00:28:27.731 00:28:27.731 ' 00:28:27.731 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:27.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:27.731 --rc genhtml_branch_coverage=1 00:28:27.731 --rc genhtml_function_coverage=1 00:28:27.731 --rc genhtml_legend=1 00:28:27.731 --rc geninfo_all_blocks=1 00:28:27.731 --rc geninfo_unexecuted_blocks=1 00:28:27.731 00:28:27.731 ' 00:28:27.731 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:27.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:27.731 --rc genhtml_branch_coverage=1 00:28:27.731 --rc genhtml_function_coverage=1 00:28:27.731 --rc genhtml_legend=1 00:28:27.731 --rc geninfo_all_blocks=1 00:28:27.731 --rc geninfo_unexecuted_blocks=1 00:28:27.731 00:28:27.731 ' 00:28:27.731 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:27.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:27.731 --rc genhtml_branch_coverage=1 00:28:27.731 --rc genhtml_function_coverage=1 00:28:27.731 --rc genhtml_legend=1 00:28:27.731 --rc geninfo_all_blocks=1 00:28:27.731 --rc geninfo_unexecuted_blocks=1 00:28:27.731 00:28:27.731 ' 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:27.732 
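The lt/cmp_versions trace a few lines above is deciding whether the installed lcov (1.15) predates 2.x by splitting both version strings on '.', '-' and ':' and comparing them component by component. A minimal standalone sketch of that idea, using a hypothetical ver_lt helper and assuming purely numeric components (the real scripts/common.sh handles more cases):

    # ver_lt A B: succeed if version A sorts strictly before version B.
    ver_lt() {
        local IFS=.-:                 # split on the same separators as cmp_versions
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i x y
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            x=${a[i]:-0}; y=${b[i]:-0}   # missing components count as 0
            ((x < y)) && return 0        # first differing component decides
            ((x > y)) && return 1
        done
        return 1                         # equal versions are not "less than"
    }

    ver_lt 1.15 2 && echo "lcov is older than 2.x"

That outcome is what selects the legacy "--rc lcov_branch_coverage=1" style LCOV_OPTS exported in the trace.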
14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:27.732 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:27.732 14:32:55 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:28:27.732 Cannot find device "nvmf_init_br" 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:28:27.732 Cannot find device "nvmf_init_br2" 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:28:27.732 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:28:27.991 Cannot find device "nvmf_tgt_br" 00:28:27.991 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:28:27.991 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:28:27.991 Cannot find device "nvmf_tgt_br2" 00:28:27.991 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:28:27.991 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:28:27.991 Cannot find device "nvmf_init_br" 00:28:27.991 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:28:27.991 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:28:27.991 Cannot find device "nvmf_init_br2" 00:28:27.991 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:28:27.991 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:28:27.991 Cannot find device "nvmf_tgt_br" 00:28:27.991 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:28:27.991 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:28:27.991 Cannot find device "nvmf_tgt_br2" 00:28:27.991 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:28:27.991 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:28:27.991 Cannot find device "nvmf_br" 00:28:27.991 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:28:27.991 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:28:27.991 Cannot find device "nvmf_init_if" 00:28:27.991 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:28:27.991 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:28:27.991 Cannot find device "nvmf_init_if2" 00:28:27.992 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:28:27.992 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:27.992 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:27.992 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:28:27.992 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:27.992 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:27.992 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:28:27.992 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:28:27.992 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:27.992 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:28:27.992 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:27.992 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:27.992 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:28:27.992 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:27.992 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:27.992 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:28:27.992 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:28:27.992 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:28:27.992 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:28:28.250 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:28:28.251 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:28:28.251 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:28:28.251 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:28:28.251 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:28:28.251 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:28.251 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:28.251 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:28.251 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:28:28.251 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:28:28.251 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:28:28.251 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:28:28.251 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:28.251 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:28.251 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:28.251 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:28:28.251 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:28:28.251 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:28:28.251 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:28.251 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
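Collected from the trace above, the virtual test network built by nvmf_veth_init boils down to one namespace, four veth pairs, addresses on 10.0.0.0/24, a bridge tying the host-side peers together, and ACCEPT rules for port 4420. A condensed sketch of those same commands, regrouped only for readability:

    # Target interfaces live in their own namespace; initiator interfaces stay
    # in the root namespace. The *_br peers get enslaved to a common bridge.
    ip netns add nvmf_tgt_ns_spdk

    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Initiator addresses in the root namespace, target addresses in the netns.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # Bring the endpoints up (the namespace also needs its loopback).
    ip link set nvmf_init_if up
    ip link set nvmf_init_if2 up
    ip netns exec nvmf_tgt_ns_spdk sh -c \
        'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'

    # Bridge the host-side peers together.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
        ip link set "$dev" master nvmf_br
    done

    # Allow NVMe/TCP traffic; each rule is tagged so teardown can strip them
    # wholesale with: iptables-save | grep -v SPDK_NVMF | iptables-restore
    for dev in nvmf_init_if nvmf_init_if2; do
        iptables -I INPUT 1 -i "$dev" -p tcp --dport 4420 -j ACCEPT \
            -m comment --comment "SPDK_NVMF:-I INPUT 1 -i $dev -p tcp --dport 4420 -j ACCEPT"
    done
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

The ping block that follows in the log is just the sanity check that all four addresses answer across that bridge.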
00:28:28.251 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:28:28.251 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:28.251 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:28:28.251 00:28:28.251 --- 10.0.0.3 ping statistics --- 00:28:28.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:28.251 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:28:28.251 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:28:28.251 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:28:28.251 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.033 ms 00:28:28.251 00:28:28.251 --- 10.0.0.4 ping statistics --- 00:28:28.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:28.251 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:28:28.251 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:28.251 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:28.251 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.016 ms 00:28:28.251 00:28:28.251 --- 10.0.0.1 ping statistics --- 00:28:28.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:28.251 rtt min/avg/max/mdev = 0.016/0.016/0.016/0.000 ms 00:28:28.251 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:28:28.251 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:28.251 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.031 ms 00:28:28.251 00:28:28.251 --- 10.0.0.2 ping statistics --- 00:28:28.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:28.251 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:28:28.251 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:28.251 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:28:28.251 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:28.251 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:28.251 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:28.251 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:28.251 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:28.251 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:28.251 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:28.251 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:28:28.251 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:28.251 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:28.251 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:28.251 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=89044 00:28:28.251 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:28:28.251 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 89044 00:28:28.251 14:32:55 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # '[' -z 89044 ']' 00:28:28.251 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:28.251 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:28.251 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:28.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:28.251 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:28.251 14:32:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:28.510 [2024-11-06 14:32:55.922943] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:28:28.510 [2024-11-06 14:32:55.923056] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:28.510 [2024-11-06 14:32:56.107313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:28.768 [2024-11-06 14:32:56.250322] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:28.768 [2024-11-06 14:32:56.250375] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:28.768 [2024-11-06 14:32:56.250392] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:28.768 [2024-11-06 14:32:56.250414] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:28.768 [2024-11-06 14:32:56.250428] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
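nvmfappstart launches nvmf_tgt inside the target namespace (the DPDK EAL and app_setup_trace notices above are its startup output) and then blocks in waitforlisten until the RPC socket answers. A rough stand-in for that launch-and-wait step, assuming rpc_get_methods as the readiness probe (the real helper is more careful about pid and socket handling):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Poll the default RPC socket until the target is ready for configuration.
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done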
00:28:28.768 [2024-11-06 14:32:56.252721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:28.768 [2024-11-06 14:32:56.252757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:29.027 [2024-11-06 14:32:56.503000] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:28:29.286 14:32:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:29.286 14:32:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@866 -- # return 0 00:28:29.286 14:32:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:29.286 14:32:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:29.286 14:32:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:29.286 14:32:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:29.286 14:32:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:29.286 14:32:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:29.544 [2024-11-06 14:32:56.979110] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:29.545 14:32:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:28:29.803 Malloc0 00:28:29.804 14:32:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:30.062 14:32:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:30.320 14:32:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:28:30.320 [2024-11-06 14:32:57.925378] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:30.320 14:32:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=89093 00:28:30.320 14:32:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:28:30.320 14:32:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 89093 /var/tmp/bdevperf.sock 00:28:30.320 14:32:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # '[' -z 89093 ']' 00:28:30.320 14:32:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:30.320 14:32:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:30.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:30.320 14:32:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
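Pulled together from the RPC trace above, the whole target-side provisioning for this test is five rpc.py calls followed by launching bdevperf in its paused (-z) mode; the arguments are exactly the ones in the log, only regrouped here for readability:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" bdev_malloc_create 64 512 -b Malloc0      # 64 MiB malloc bdev, 512-byte blocks
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

    # bdevperf acts as the host/initiator; -z keeps it idle on its own RPC
    # socket until perform_tests is sent later.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &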
00:28:30.320 14:32:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:30.320 14:32:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:30.579 [2024-11-06 14:32:58.049734] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:28:30.579 [2024-11-06 14:32:58.049869] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89093 ] 00:28:30.839 [2024-11-06 14:32:58.231457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:30.839 [2024-11-06 14:32:58.352796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:31.098 [2024-11-06 14:32:58.556817] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:28:31.356 14:32:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:31.356 14:32:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@866 -- # return 0 00:28:31.356 14:32:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:28:31.615 14:32:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:28:31.874 NVMe0n1 00:28:31.874 14:32:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=89117 00:28:31.874 14:32:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:31.874 14:32:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:28:31.874 Running I/O for 10 seconds... 
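On the bdevperf side the test then talks to /var/tmp/bdevperf.sock: it sets the bdev_nvme retry option, attaches a controller with a 5-second ctrlr-loss timeout and a 2-second reconnect delay, and fires perform_tests in the background. Condensed from the trace; bperf_rpc is only a hypothetical shorthand for the rpc.py -s invocation used in the log:

    bperf_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock "$@"; }

    bperf_rpc bdev_nvme_set_options -r -1
    bperf_rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

    # Start the 10-second verify workload; the timeout test then removes the
    # 4420 listener underneath it, which is what produces the aborted-command
    # (SQ DELETION) notices that follow in the log.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests &
    rpc_pid=$!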
00:28:32.819 14:33:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:28:33.081 8296.00 IOPS, 32.41 MiB/s [2024-11-06T14:33:00.716Z] [2024-11-06 14:33:00.517609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:74944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.081 [2024-11-06 14:33:00.517680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.081 [2024-11-06 14:33:00.517715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:74952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.081 [2024-11-06 14:33:00.517729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.081 [2024-11-06 14:33:00.517747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:74960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.081 [2024-11-06 14:33:00.517760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.081 [2024-11-06 14:33:00.517778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:74968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.081 [2024-11-06 14:33:00.517790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.081 [2024-11-06 14:33:00.517811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:74976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.081 [2024-11-06 14:33:00.517822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.081 [2024-11-06 14:33:00.517852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:74984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.081 [2024-11-06 14:33:00.517865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.081 [2024-11-06 14:33:00.517882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:74992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.081 [2024-11-06 14:33:00.517895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.081 [2024-11-06 14:33:00.517912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:75000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.081 [2024-11-06 14:33:00.517924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.081 [2024-11-06 14:33:00.517941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:75008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.081 [2024-11-06 14:33:00.517953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.081 [2024-11-06 14:33:00.517970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:75016 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.081 [2024-11-06 14:33:00.517982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.081 [2024-11-06 14:33:00.517998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:75024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.081 [2024-11-06 14:33:00.518010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.081 [2024-11-06 14:33:00.518027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:75032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.081 [2024-11-06 14:33:00.518038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.081 [2024-11-06 14:33:00.518061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:74368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.081 [2024-11-06 14:33:00.518073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.081 [2024-11-06 14:33:00.518110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:74376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.081 [2024-11-06 14:33:00.518124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.081 [2024-11-06 14:33:00.518144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:74384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.081 [2024-11-06 14:33:00.518156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.081 [2024-11-06 14:33:00.518173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:74392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.081 [2024-11-06 14:33:00.518185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.081 [2024-11-06 14:33:00.518202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:74400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.081 [2024-11-06 14:33:00.518215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.081 [2024-11-06 14:33:00.518232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:74408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.081 [2024-11-06 14:33:00.518244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.081 [2024-11-06 14:33:00.518260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:74416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.081 [2024-11-06 14:33:00.518272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.081 [2024-11-06 14:33:00.518288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:74424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:33.081 [2024-11-06 14:33:00.518300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.081 [2024-11-06 14:33:00.518321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:74432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.081 [2024-11-06 14:33:00.518333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.081 [2024-11-06 14:33:00.518359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:74440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.081 [2024-11-06 14:33:00.518371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.081 [2024-11-06 14:33:00.518390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:74448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.081 [2024-11-06 14:33:00.518403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.081 [2024-11-06 14:33:00.518419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:74456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.081 [2024-11-06 14:33:00.518431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.081 [2024-11-06 14:33:00.518448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:74464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.081 [2024-11-06 14:33:00.518460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.081 [2024-11-06 14:33:00.518477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:74472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.081 [2024-11-06 14:33:00.518491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.081 [2024-11-06 14:33:00.518508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:74480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.081 [2024-11-06 14:33:00.518520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.081 [2024-11-06 14:33:00.518536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:74488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.081 [2024-11-06 14:33:00.518556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.081 [2024-11-06 14:33:00.518576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:75040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.081 [2024-11-06 14:33:00.518588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.081 [2024-11-06 14:33:00.518604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:75048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.081 [2024-11-06 14:33:00.518617] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.081 [2024-11-06 14:33:00.518634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:75056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.081 [2024-11-06 14:33:00.518646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.081 [2024-11-06 14:33:00.518662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:75064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.081 [2024-11-06 14:33:00.518674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.081 [2024-11-06 14:33:00.518690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:75072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.081 [2024-11-06 14:33:00.518701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.081 [2024-11-06 14:33:00.518718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:75080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.081 [2024-11-06 14:33:00.518730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.081 [2024-11-06 14:33:00.518746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:75088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.081 [2024-11-06 14:33:00.518758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.081 [2024-11-06 14:33:00.518774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:75096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.081 [2024-11-06 14:33:00.518786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.081 [2024-11-06 14:33:00.518804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:75104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.081 [2024-11-06 14:33:00.518816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.081 [2024-11-06 14:33:00.518832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:75112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.082 [2024-11-06 14:33:00.518855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.082 [2024-11-06 14:33:00.518874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:75120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.082 [2024-11-06 14:33:00.518886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.082 [2024-11-06 14:33:00.518902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:75128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.082 [2024-11-06 14:33:00.518915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.082 [2024-11-06 14:33:00.518931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:75136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.082 [2024-11-06 14:33:00.518944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.082 [2024-11-06 14:33:00.518960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:75144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:33.082 [2024-11-06 14:33:00.518971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.082 [2024-11-06 14:33:00.518987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:74496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.082 [2024-11-06 14:33:00.518999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.082 [2024-11-06 14:33:00.519015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:74504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.082 [2024-11-06 14:33:00.519026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.082 [2024-11-06 14:33:00.519045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:74512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.082 [2024-11-06 14:33:00.519057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.082 [2024-11-06 14:33:00.519074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:74520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.082 [2024-11-06 14:33:00.519086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.082 [2024-11-06 14:33:00.519120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:74528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.082 [2024-11-06 14:33:00.519132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.082 [2024-11-06 14:33:00.519148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:74536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.082 [2024-11-06 14:33:00.519161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.082 [2024-11-06 14:33:00.519177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:74544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.082 [2024-11-06 14:33:00.519190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.082 [2024-11-06 14:33:00.519207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:74552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.082 [2024-11-06 14:33:00.519219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:33.082 [2024-11-06 14:33:00.519236 .. 14:33:00.521482] nvme_qpair.c: 243/474: *NOTICE*: repeated command/completion pairs for queued I/O on sqid:1: WRITE lba:75152..75384 and READ lba:74560..74928, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:33.084 [2024-11-06 14:33:00.521497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b280 is same with the state(6) to be set
00:28:33.084 [2024-11-06 14:33:00.521521] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:28:33.084 [2024-11-06 14:33:00.521535] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:28:33.084 [2024-11-06 14:33:00.521548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74936 len:8 PRP1 0x0 PRP2 0x0
00:28:33.084 [2024-11-06 14:33:00.521563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:33.084 [2024-11-06 14:33:00.522002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:33.084 [2024-11-06 14:33:00.522024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:33.084 [2024-11-06 14:33:00.522042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:33.084 [2024-11-06 14:33:00.522065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:33.084 [2024-11-06 14:33:00.522085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:33.084 [2024-11-06 14:33:00.522096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:33.084 [2024-11-06 14:33:00.522112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:33.084 [2024-11-06 14:33:00.522124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:33.084 [2024-11-06 14:33:00.522138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set
00:28:33.084 [2024-11-06 14:33:00.522338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:28:33.084 [2024-11-06 14:33:00.522371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor
00:28:33.084 [2024-11-06 14:33:00.522511] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.084 [2024-11-06 14:33:00.522534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420
00:28:33.084 [2024-11-06 14:33:00.522563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set
00:28:33.084 [2024-11-06 14:33:00.522586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor
00:28:33.084 [2024-11-06 14:33:00.522609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:28:33.084 [2024-11-06 14:33:00.522622] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:28:33.084 [2024-11-06 14:33:00.522643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:28:33.084 [2024-11-06 14:33:00.522658] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
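The "connect() failed, errno = 111" lines above come from the uring socket layer and are plain POSIX errno reporting; nothing SPDK-specific is needed to decode them. A quick, illustrative check on any Linux host (not part of timeout.sh):

    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # prints: ECONNREFUSED - Connection refused

That is the expected result here, consistent with the target's TCP listener on 10.0.0.3:4420 having been taken down, as the nvmf_subsystem_remove_listener call later in this same test does explicitly, so every reconnect attempt is refused until the listener is restored.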
00:28:33.084 [2024-11-06 14:33:00.522679] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:28:33.084 14:33:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:28:34.958 4648.00 IOPS, 18.16 MiB/s [2024-11-06T14:33:02.593Z] 3098.67 IOPS, 12.10 MiB/s [2024-11-06T14:33:02.593Z] [2024-11-06 14:33:02.519679] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.958 [2024-11-06 14:33:02.519759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420 00:28:34.958 [2024-11-06 14:33:02.519787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:28:34.958 [2024-11-06 14:33:02.519823] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:28:34.958 [2024-11-06 14:33:02.519865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:28:34.958 [2024-11-06 14:33:02.519880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:28:34.958 [2024-11-06 14:33:02.519900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:28:34.958 [2024-11-06 14:33:02.519918] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:28:34.958 [2024-11-06 14:33:02.519938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:28:34.958 14:33:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:28:34.958 14:33:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:34.958 14:33:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:28:35.217 14:33:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:28:35.217 14:33:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:28:35.217 14:33:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:28:35.217 14:33:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:28:35.476 14:33:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:28:35.476 14:33:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:28:36.979 2324.00 IOPS, 9.08 MiB/s [2024-11-06T14:33:04.614Z] 1859.20 IOPS, 7.26 MiB/s [2024-11-06T14:33:04.614Z] [2024-11-06 14:33:04.516936] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.979 [2024-11-06 14:33:04.517015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420 00:28:36.979 [2024-11-06 14:33:04.517039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:28:36.979 [2024-11-06 14:33:04.517078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:28:36.979 [2024-11-06 14:33:04.517111] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:28:36.979 [2024-11-06 14:33:04.517126] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:28:36.979 [2024-11-06 14:33:04.517146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:28:36.979 [2024-11-06 14:33:04.517163] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:28:36.979 [2024-11-06 14:33:04.517182] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:28:38.854 1549.33 IOPS, 6.05 MiB/s [2024-11-06T14:33:06.749Z] 1328.00 IOPS, 5.19 MiB/s [2024-11-06T14:33:06.749Z] [2024-11-06 14:33:06.514021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:28:39.114 [2024-11-06 14:33:06.514107] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:28:39.114 [2024-11-06 14:33:06.514124] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:28:39.114 [2024-11-06 14:33:06.514142] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state
00:28:39.114 [2024-11-06 14:33:06.514160] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:28:40.051 1162.00 IOPS, 4.54 MiB/s
00:28:40.051 Latency(us)
00:28:40.051 [2024-11-06T14:33:07.686Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:40.051 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:28:40.051 Verification LBA range: start 0x0 length 0x4000
00:28:40.051 NVMe0n1 : 8.11 1146.72 4.48 15.79 0.00 110396.47 3421.56 7061253.96
00:28:40.051 [2024-11-06T14:33:07.686Z] ===================================================================================================================
00:28:40.051 [2024-11-06T14:33:07.686Z] Total : 1146.72 4.48 15.79 0.00 110396.47 3421.56 7061253.96
00:28:40.051 {
00:28:40.051   "results": [
00:28:40.051     {
00:28:40.051       "job": "NVMe0n1",
00:28:40.051       "core_mask": "0x4",
00:28:40.051       "workload": "verify",
00:28:40.051       "status": "finished",
00:28:40.051       "verify_range": {
00:28:40.051         "start": 0,
00:28:40.051         "length": 16384
00:28:40.051       },
00:28:40.051       "queue_depth": 128,
00:28:40.051       "io_size": 4096,
00:28:40.051       "runtime": 8.10662,
00:28:40.051       "iops": 1146.717127483464,
00:28:40.051       "mibps": 4.479363779232282,
00:28:40.051       "io_failed": 128,
00:28:40.051       "io_timeout": 0,
00:28:40.051       "avg_latency_us": 110396.47360920763,
00:28:40.051       "min_latency_us": 3421.5582329317267,
00:28:40.051       "max_latency_us": 7061253.963052209
00:28:40.051     }
00:28:40.051   ],
00:28:40.051   "core_count": 1
00:28:40.051 }
00:28:40.620 14:33:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller
00:28:40.620 14:33:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:28:40.620 14:33:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:28:40.620 14:33:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:28:40.620 14:33:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- #
get_bdev 00:28:40.620 14:33:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:28:40.620 14:33:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:28:40.880 14:33:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:28:40.880 14:33:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 89117 00:28:40.880 14:33:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 89093 00:28:40.880 14:33:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' -z 89093 ']' 00:28:40.880 14:33:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # kill -0 89093 00:28:40.880 14:33:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # uname 00:28:40.880 14:33:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:40.880 14:33:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 89093 00:28:40.880 14:33:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:28:40.880 killing process with pid 89093 00:28:40.880 14:33:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:28:40.880 14:33:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 89093' 00:28:40.880 14:33:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@971 -- # kill 89093 00:28:40.880 Received shutdown signal, test time was about 9.027833 seconds 00:28:40.880 00:28:40.880 Latency(us) 00:28:40.880 [2024-11-06T14:33:08.515Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:40.880 [2024-11-06T14:33:08.515Z] =================================================================================================================== 00:28:40.880 [2024-11-06T14:33:08.515Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:40.880 14:33:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@976 -- # wait 89093 00:28:42.257 14:33:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:28:42.257 [2024-11-06 14:33:09.747076] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:42.257 14:33:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=89246 00:28:42.257 14:33:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:28:42.257 14:33:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 89246 /var/tmp/bdevperf.sock 00:28:42.257 14:33:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # '[' -z 89246 ']' 00:28:42.257 14:33:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:42.257 14:33:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:42.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
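For readability: the get_controller and get_bdev steps traced at host/timeout.sh@41 and host/timeout.sh@37 above reduce to querying bdevperf's RPC socket and extracting the name field. A minimal sketch of equivalent helpers, assuming the same socket path (the rpc.py and jq commands are verbatim from the trace; the surrounding function bodies are an assumption, not copied from timeout.sh):

    get_controller() {
        # list NVMe controllers known to the bdevperf app and print their names
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'
    }
    get_bdev() {
        # list the bdevs exposed by those controllers and print their names
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs | jq -r '.[].name'
    }

While the controller is still attached the pair returns NVMe0 and NVMe0n1 (the @57/@58 checks above); once the induced connection failures have caused the controller to be dropped, both return empty strings, which is what the @62/@63 [[ '' == '' ]] comparisons assert. The bdevperf JSON summary above is also self-consistent for 4096-byte I/O: 1146.72 IOPS * 4096 / 1048576 is approximately 4.48 MiB/s, matching the reported mibps.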
00:28:42.257 14:33:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:42.257 14:33:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:42.257 14:33:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:42.257 [2024-11-06 14:33:09.863611] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:28:42.257 [2024-11-06 14:33:09.863736] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89246 ] 00:28:42.516 [2024-11-06 14:33:10.047860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:42.789 [2024-11-06 14:33:10.165087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:42.789 [2024-11-06 14:33:10.378923] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:28:43.357 14:33:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:43.357 14:33:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@866 -- # return 0 00:28:43.357 14:33:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:28:43.357 14:33:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:28:43.616 NVMe0n1 00:28:43.616 14:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=89264 00:28:43.616 14:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:43.616 14:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:28:43.875 Running I/O for 10 seconds... 
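Condensing the setup just traced (host/timeout.sh@71 through @86), the second run boils down to the sequence below. The commands and flag values are exactly the ones shown in the trace; the shortened paths (relative to /home/vagrant/spdk_repo/spdk), the backgrounding, and the comments are illustrative assumptions rather than a copy of timeout.sh:

    # re-create the TCP listener that the previous run had removed
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # start bdevperf on core mask 0x4 in RPC-driven mode: queue depth 128, 4 KiB verify workload, 10 s run
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
    # configure bdev_nvme as traced, then attach the controller with the timeout knobs under test
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
    # start the I/O; the listener is then yanked away mid-run (host/timeout.sh@87 below)
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

With those three knobs, a lost connection should be retried roughly every second, outstanding I/O should start failing back to bdevperf after about 2 seconds, and the controller should be deleted entirely after about 5 seconds, which is the behaviour the abort and reset messages that follow are exercising.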
00:28:44.815 14:33:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:28:44.815 9425.00 IOPS, 36.82 MiB/s [2024-11-06T14:33:12.450Z]
00:28:44.815 [2024-11-06 14:33:12.398975 .. 14:33:12.401180] nvme_qpair.c: 243/474: *NOTICE*: repeated command/completion pairs for queued I/O on sqid:1: WRITE lba:84552..84784 and READ lba:84104..84416, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:44.819 [2024-11-06 14:33:12.401196]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:84792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.819 [2024-11-06 14:33:12.401208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.819 [2024-11-06 14:33:12.401224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:84800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.819 [2024-11-06 14:33:12.401236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.819 [2024-11-06 14:33:12.401252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:84424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.819 [2024-11-06 14:33:12.401264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.819 [2024-11-06 14:33:12.401281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:84432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.819 [2024-11-06 14:33:12.401293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.819 [2024-11-06 14:33:12.401311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:84440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.819 [2024-11-06 14:33:12.401323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.819 [2024-11-06 14:33:12.401340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:84448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.819 [2024-11-06 14:33:12.401352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.819 [2024-11-06 14:33:12.401372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:84456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.819 [2024-11-06 14:33:12.401385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.819 [2024-11-06 14:33:12.401400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:84464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.819 [2024-11-06 14:33:12.401412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.819 [2024-11-06 14:33:12.401429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:84472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.819 [2024-11-06 14:33:12.401441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.819 [2024-11-06 14:33:12.401457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:84480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.819 [2024-11-06 14:33:12.401469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.819 [2024-11-06 14:33:12.401485] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:84808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.819 [2024-11-06 14:33:12.401497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.819 [2024-11-06 14:33:12.401513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:84816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.819 [2024-11-06 14:33:12.401525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.819 [2024-11-06 14:33:12.401543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.819 [2024-11-06 14:33:12.401556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.819 [2024-11-06 14:33:12.401573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:84832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.819 [2024-11-06 14:33:12.401585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.820 [2024-11-06 14:33:12.401604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:84840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.820 [2024-11-06 14:33:12.401616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.820 [2024-11-06 14:33:12.401633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:84848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.820 [2024-11-06 14:33:12.401645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.820 [2024-11-06 14:33:12.401661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:84856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.820 [2024-11-06 14:33:12.401672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.820 [2024-11-06 14:33:12.401690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:84864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.820 [2024-11-06 14:33:12.401702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.820 [2024-11-06 14:33:12.401718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:84872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.820 [2024-11-06 14:33:12.401730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.820 [2024-11-06 14:33:12.401746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:84880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.820 [2024-11-06 14:33:12.401757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.820 [2024-11-06 14:33:12.401774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:60 nsid:1 lba:84888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.820 [2024-11-06 14:33:12.401786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.820 [2024-11-06 14:33:12.401803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:84896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.820 [2024-11-06 14:33:12.401815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.820 [2024-11-06 14:33:12.401833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:84904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.820 [2024-11-06 14:33:12.401855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.820 [2024-11-06 14:33:12.401872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.820 [2024-11-06 14:33:12.401884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.820 [2024-11-06 14:33:12.401901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:84920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.820 [2024-11-06 14:33:12.401915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.820 [2024-11-06 14:33:12.401932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:84928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.820 [2024-11-06 14:33:12.401944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.820 [2024-11-06 14:33:12.401960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:84936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.820 [2024-11-06 14:33:12.401974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.820 [2024-11-06 14:33:12.401990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:84944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.820 [2024-11-06 14:33:12.402002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.820 [2024-11-06 14:33:12.402019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:84952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.820 [2024-11-06 14:33:12.402031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.820 [2024-11-06 14:33:12.402047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:84960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.820 [2024-11-06 14:33:12.402059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.820 [2024-11-06 14:33:12.402079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:84968 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:28:44.820 [2024-11-06 14:33:12.402093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.820 [2024-11-06 14:33:12.402122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:84976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.820 [2024-11-06 14:33:12.402134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.821 [2024-11-06 14:33:12.402152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:84984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.821 [2024-11-06 14:33:12.402164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.821 [2024-11-06 14:33:12.402180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:84992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.821 [2024-11-06 14:33:12.402192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.821 [2024-11-06 14:33:12.402208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:84488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.821 [2024-11-06 14:33:12.402221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.821 [2024-11-06 14:33:12.402238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:84496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.821 [2024-11-06 14:33:12.402250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.821 [2024-11-06 14:33:12.402266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:84504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.821 [2024-11-06 14:33:12.402287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.821 [2024-11-06 14:33:12.402303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.821 [2024-11-06 14:33:12.402315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.821 [2024-11-06 14:33:12.402334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:84520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.821 [2024-11-06 14:33:12.402348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.821 [2024-11-06 14:33:12.402365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:84528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.821 [2024-11-06 14:33:12.402378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.821 [2024-11-06 14:33:12.402406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:84536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.821 
[2024-11-06 14:33:12.402419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.821 [2024-11-06 14:33:12.402436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.821 [2024-11-06 14:33:12.402448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.821 [2024-11-06 14:33:12.402464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:85000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.821 [2024-11-06 14:33:12.402476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.821 [2024-11-06 14:33:12.402494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:85008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.821 [2024-11-06 14:33:12.402506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.821 [2024-11-06 14:33:12.402523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:85016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.821 [2024-11-06 14:33:12.402535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.821 [2024-11-06 14:33:12.402551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:85024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.821 [2024-11-06 14:33:12.402572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.821 [2024-11-06 14:33:12.402593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:85032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.821 [2024-11-06 14:33:12.402605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.821 [2024-11-06 14:33:12.402621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:85040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.821 [2024-11-06 14:33:12.402633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.821 [2024-11-06 14:33:12.402650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:85048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.821 [2024-11-06 14:33:12.402662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.821 [2024-11-06 14:33:12.402678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:85056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.822 [2024-11-06 14:33:12.402692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.822 [2024-11-06 14:33:12.402708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:85064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.822 [2024-11-06 14:33:12.402720] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.822 [2024-11-06 14:33:12.402736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:85072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.822 [2024-11-06 14:33:12.402749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.822 [2024-11-06 14:33:12.402766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:85080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.822 [2024-11-06 14:33:12.402782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.822 [2024-11-06 14:33:12.402799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:85088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.822 [2024-11-06 14:33:12.402811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.822 [2024-11-06 14:33:12.402831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.822 [2024-11-06 14:33:12.402852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.822 [2024-11-06 14:33:12.402869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:85104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.822 [2024-11-06 14:33:12.402881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.822 [2024-11-06 14:33:12.402900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:85112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.822 [2024-11-06 14:33:12.402913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.822 [2024-11-06 14:33:12.402928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b500 is same with the state(6) to be set 00:28:44.822 [2024-11-06 14:33:12.402946] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:44.822 [2024-11-06 14:33:12.402960] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:44.822 [2024-11-06 14:33:12.402973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85120 len:8 PRP1 0x0 PRP2 0x0 00:28:44.822 [2024-11-06 14:33:12.402990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.822 [2024-11-06 14:33:12.403396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.822 [2024-11-06 14:33:12.403425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.822 [2024-11-06 14:33:12.403450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.822 [2024-11-06 14:33:12.403469] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.822 [2024-11-06 14:33:12.403488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.822 [2024-11-06 14:33:12.403501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.822 [2024-11-06 14:33:12.403516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.822 [2024-11-06 14:33:12.403528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.822 [2024-11-06 14:33:12.403543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:28:44.822 [2024-11-06 14:33:12.403753] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.822 [2024-11-06 14:33:12.403790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:28:44.822 [2024-11-06 14:33:12.403946] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.822 [2024-11-06 14:33:12.403969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:28:44.822 [2024-11-06 14:33:12.403987] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:28:44.822 [2024-11-06 14:33:12.404010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:28:44.822 [2024-11-06 14:33:12.404032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.823 [2024-11-06 14:33:12.404045] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.823 [2024-11-06 14:33:12.404063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.823 [2024-11-06 14:33:12.404081] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.823 [2024-11-06 14:33:12.404111] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.823 14:33:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:28:46.020 5256.50 IOPS, 20.53 MiB/s [2024-11-06T14:33:13.655Z] [2024-11-06 14:33:13.402717] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.020 [2024-11-06 14:33:13.402800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:28:46.020 [2024-11-06 14:33:13.402824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:28:46.020 [2024-11-06 14:33:13.402873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:28:46.020 [2024-11-06 14:33:13.402916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.020 [2024-11-06 14:33:13.402931] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.020 [2024-11-06 14:33:13.402953] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.020 [2024-11-06 14:33:13.402972] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:46.020 [2024-11-06 14:33:13.402992] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.020 14:33:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:28:46.020 [2024-11-06 14:33:13.617301] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:46.020 14:33:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 89264 00:28:46.958 3504.33 IOPS, 13.69 MiB/s [2024-11-06T14:33:14.593Z] [2024-11-06 14:33:14.416432] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
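The sequence above is the timeout scenario this run is exercising: with nothing listening on 10.0.0.3:4420, queued I/O on qid:1 is completed as ABORTED - SQ DELETION (status 00/08, generic status code 0x08, Command Aborted due to SQ Deletion), every reconnect attempt fails with errno 111 (ECONNREFUSED), and the controller stays in the failed state until host/timeout.sh re-adds the listener at 14:33:13, after which the next reset succeeds. A minimal sketch of that remove/re-add cycle, assuming only the rpc.py subcommands, NQN, and address already shown in this log (rpc.py abbreviates the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path):

  # Drop the TCP listener: in-flight and queued I/O are aborted (SQ DELETION) and reconnects are refused, as in the log above.
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  sleep 1
  # Restore the listener: the next controller reset/reconnect attempt can then succeed.
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420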
00:28:48.832 2628.25 IOPS, 10.27 MiB/s [2024-11-06T14:33:17.404Z] 3899.20 IOPS, 15.23 MiB/s [2024-11-06T14:33:18.340Z] 4958.33 IOPS, 19.37 MiB/s [2024-11-06T14:33:19.717Z] 5714.86 IOPS, 22.32 MiB/s [2024-11-06T14:33:20.285Z] 6274.50 IOPS, 24.51 MiB/s [2024-11-06T14:33:21.663Z] 6708.00 IOPS, 26.20 MiB/s [2024-11-06T14:33:21.663Z] 7048.40 IOPS, 27.53 MiB/s
00:28:54.028 Latency(us)
00:28:54.028 [2024-11-06T14:33:21.663Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:54.028 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:28:54.028 Verification LBA range: start 0x0 length 0x4000
00:28:54.028 NVMe0n1 : 10.01 7055.10 27.56 0.00 0.00 18107.78 1250.18 3018551.31
00:28:54.028 [2024-11-06T14:33:21.663Z] ===================================================================================================================
00:28:54.028 [2024-11-06T14:33:21.663Z] Total : 7055.10 27.56 0.00 0.00 18107.78 1250.18 3018551.31
00:28:54.028 {
00:28:54.028   "results": [
00:28:54.028     {
00:28:54.028       "job": "NVMe0n1",
00:28:54.028       "core_mask": "0x4",
00:28:54.028       "workload": "verify",
00:28:54.028       "status": "finished",
00:28:54.028       "verify_range": {
00:28:54.028         "start": 0,
00:28:54.028         "length": 16384
00:28:54.028       },
00:28:54.028       "queue_depth": 128,
00:28:54.028       "io_size": 4096,
00:28:54.028       "runtime": 10.008643,
00:28:54.028       "iops": 7055.102275103628,
00:28:54.028       "mibps": 27.558993262123547,
00:28:54.028       "io_failed": 0,
00:28:54.028       "io_timeout": 0,
00:28:54.028       "avg_latency_us": 18107.780822309232,
00:28:54.028       "min_latency_us": 1250.1847389558234,
00:28:54.028       "max_latency_us": 3018551.3124497994
00:28:54.028     }
00:28:54.028   ],
00:28:54.028   "core_count": 1
00:28:54.028 }
00:28:54.028 14:33:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=89369
00:28:54.028 14:33:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:28:54.028 14:33:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:28:54.028 Running I/O for 10 seconds...
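In the JSON summary above, mibps follows directly from iops and io_size: 7055.102275 IOPS x 4096 bytes per I/O divided by 2^20 bytes per MiB is about 27.559 MiB/s, matching the reported mibps field, and the roughly 3.0 s max_latency_us is consistent with I/O that was held across the listener outage and only completed after the successful reset. A quick sanity check, as a sketch (assumes python3 is available on the test VM; the values are copied from the results block above):

  python3 -c 'iops = 7055.102275103628; io_size = 4096; print(iops * io_size / 2**20)'  # prints ~27.558993 (MiB/s)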
00:28:55.010 14:33:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:28:55.010 9394.00 IOPS, 36.70 MiB/s [2024-11-06T14:33:22.645Z] [2024-11-06 14:33:22.508172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:83392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.010 [2024-11-06 14:33:22.508245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.010 [2024-11-06 14:33:22.508276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:83400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.010 [2024-11-06 14:33:22.508289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.010 [2024-11-06 14:33:22.508305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:83408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.010 [2024-11-06 14:33:22.508318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.010 [2024-11-06 14:33:22.508332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:83416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.010 [2024-11-06 14:33:22.508345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.010 [2024-11-06 14:33:22.508359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:83424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.010 [2024-11-06 14:33:22.508372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.010 [2024-11-06 14:33:22.508387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:83432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.010 [2024-11-06 14:33:22.508400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.010 [2024-11-06 14:33:22.508414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:83440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.010 [2024-11-06 14:33:22.508427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.010 [2024-11-06 14:33:22.508442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:83448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.010 [2024-11-06 14:33:22.508454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.010 [2024-11-06 14:33:22.508469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:82752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.010 [2024-11-06 14:33:22.508481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.010 [2024-11-06 14:33:22.508496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:82760 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.010 [2024-11-06 14:33:22.508508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.010 [2024-11-06 14:33:22.508522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:82768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.010 [2024-11-06 14:33:22.508535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.010 [2024-11-06 14:33:22.508549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:82776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.010 [2024-11-06 14:33:22.508561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.010 [2024-11-06 14:33:22.508575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:82784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.010 [2024-11-06 14:33:22.508587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.010 [2024-11-06 14:33:22.508601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:82792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.010 [2024-11-06 14:33:22.508613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.011 [2024-11-06 14:33:22.508627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:82800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.011 [2024-11-06 14:33:22.508639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.011 [2024-11-06 14:33:22.508652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.011 [2024-11-06 14:33:22.508664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.011 [2024-11-06 14:33:22.508677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:82816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.011 [2024-11-06 14:33:22.508689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.011 [2024-11-06 14:33:22.508710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:82824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.011 [2024-11-06 14:33:22.508722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.011 [2024-11-06 14:33:22.508736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:82832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.011 [2024-11-06 14:33:22.508748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.011 [2024-11-06 14:33:22.508763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:82840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:55.011 [2024-11-06 14:33:22.508775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.011 [2024-11-06 14:33:22.508790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:82848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.011 [2024-11-06 14:33:22.508802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.011 [2024-11-06 14:33:22.508817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:82856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.011 [2024-11-06 14:33:22.508829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.011 [2024-11-06 14:33:22.508854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:82864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.011 [2024-11-06 14:33:22.508868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.011 [2024-11-06 14:33:22.508881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:82872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.011 [2024-11-06 14:33:22.508894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.011 [2024-11-06 14:33:22.508908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:82880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.011 [2024-11-06 14:33:22.508921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.011 [2024-11-06 14:33:22.508935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:82888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.011 [2024-11-06 14:33:22.508947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.011 [2024-11-06 14:33:22.508960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:82896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.011 [2024-11-06 14:33:22.508996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.011 [2024-11-06 14:33:22.509010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:82904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.011 [2024-11-06 14:33:22.509022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.011 [2024-11-06 14:33:22.509036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:82912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.011 [2024-11-06 14:33:22.509048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.011 [2024-11-06 14:33:22.509062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:82920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.011 [2024-11-06 
14:33:22.509075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.011 [2024-11-06 14:33:22.509089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:82928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.011 [2024-11-06 14:33:22.509101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.011 [2024-11-06 14:33:22.509116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:82936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.011 [2024-11-06 14:33:22.509127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.011 [2024-11-06 14:33:22.509141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:83456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.011 [2024-11-06 14:33:22.509153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.011 [2024-11-06 14:33:22.509168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:83464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.011 [2024-11-06 14:33:22.509179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.011 [2024-11-06 14:33:22.509193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:83472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.011 [2024-11-06 14:33:22.509205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.011 [2024-11-06 14:33:22.509218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:83480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.011 [2024-11-06 14:33:22.509230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.011 [2024-11-06 14:33:22.509244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:83488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.011 [2024-11-06 14:33:22.509258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.011 [2024-11-06 14:33:22.509273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:83496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.011 [2024-11-06 14:33:22.509285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.011 [2024-11-06 14:33:22.509298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:83504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.011 [2024-11-06 14:33:22.509311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.011 [2024-11-06 14:33:22.509324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:83512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.011 [2024-11-06 14:33:22.509336] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.011 [2024-11-06 14:33:22.509350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.011 [2024-11-06 14:33:22.509361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.011 [2024-11-06 14:33:22.509375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:82952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.011 [2024-11-06 14:33:22.509387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.011 [2024-11-06 14:33:22.509400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:82960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.011 [2024-11-06 14:33:22.509411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.011 [2024-11-06 14:33:22.509425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:82968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.011 [2024-11-06 14:33:22.509436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.011 [2024-11-06 14:33:22.509449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:82976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.011 [2024-11-06 14:33:22.509461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.011 [2024-11-06 14:33:22.509473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:82984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.011 [2024-11-06 14:33:22.509485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.011 [2024-11-06 14:33:22.509498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.011 [2024-11-06 14:33:22.509509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.011 [2024-11-06 14:33:22.509523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:83000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.011 [2024-11-06 14:33:22.509535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.011 [2024-11-06 14:33:22.509548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:83008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.011 [2024-11-06 14:33:22.509560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.011 [2024-11-06 14:33:22.509575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:83016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.011 [2024-11-06 14:33:22.509587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.011 [2024-11-06 14:33:22.509601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:83024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.012 [2024-11-06 14:33:22.509613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.012 [2024-11-06 14:33:22.509627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:83032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.012 [2024-11-06 14:33:22.509639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.012 [2024-11-06 14:33:22.509653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:83040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.012 [2024-11-06 14:33:22.509665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.012 [2024-11-06 14:33:22.509679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:83048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.012 [2024-11-06 14:33:22.509691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.012 [2024-11-06 14:33:22.509704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:83056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.012 [2024-11-06 14:33:22.509716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.012 [2024-11-06 14:33:22.509729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:83064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.012 [2024-11-06 14:33:22.509741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.012 [2024-11-06 14:33:22.509754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:83520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.012 [2024-11-06 14:33:22.509766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.012 [2024-11-06 14:33:22.509780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:83528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.012 [2024-11-06 14:33:22.509791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.012 [2024-11-06 14:33:22.509804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:83536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.012 [2024-11-06 14:33:22.509815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.012 [2024-11-06 14:33:22.509829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:83544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.012 [2024-11-06 14:33:22.509852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.012 [2024-11-06 14:33:22.509866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:83552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.012 [2024-11-06 14:33:22.509878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.012 [2024-11-06 14:33:22.509891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:83560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.012 [2024-11-06 14:33:22.509903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.012 [2024-11-06 14:33:22.509915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:83568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.012 [2024-11-06 14:33:22.509927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.012 [2024-11-06 14:33:22.509940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:83576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.012 [2024-11-06 14:33:22.509952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.012 [2024-11-06 14:33:22.509966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:83072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.012 [2024-11-06 14:33:22.509978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.012 [2024-11-06 14:33:22.509992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:83080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.012 [2024-11-06 14:33:22.510005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.012 [2024-11-06 14:33:22.510018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:83088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.012 [2024-11-06 14:33:22.510030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.012 [2024-11-06 14:33:22.510043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:83096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.012 [2024-11-06 14:33:22.510054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.012 [2024-11-06 14:33:22.510068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:83104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.012 [2024-11-06 14:33:22.510080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.012 [2024-11-06 14:33:22.510093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:83112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.012 [2024-11-06 14:33:22.510105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.012 
[2024-11-06 14:33:22.510119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:83120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.012 [2024-11-06 14:33:22.510132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.012 [2024-11-06 14:33:22.510145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:83128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.012 [2024-11-06 14:33:22.510157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.012 [2024-11-06 14:33:22.510171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:83136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.012 [2024-11-06 14:33:22.510183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.012 [2024-11-06 14:33:22.510196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:83144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.012 [2024-11-06 14:33:22.510207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.012 [2024-11-06 14:33:22.510220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:83152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.012 [2024-11-06 14:33:22.510233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.012 [2024-11-06 14:33:22.510247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:83160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.012 [2024-11-06 14:33:22.510258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.012 [2024-11-06 14:33:22.510272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:83168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.012 [2024-11-06 14:33:22.510283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.012 [2024-11-06 14:33:22.510297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:83176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.012 [2024-11-06 14:33:22.510308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.012 [2024-11-06 14:33:22.510322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:83184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.012 [2024-11-06 14:33:22.510333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.012 [2024-11-06 14:33:22.510347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:83192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.012 [2024-11-06 14:33:22.510359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.012 [2024-11-06 14:33:22.510374] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:83200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.012 [2024-11-06 14:33:22.510385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.012 [2024-11-06 14:33:22.510399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:83208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.012 [2024-11-06 14:33:22.510411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.012 [2024-11-06 14:33:22.510425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.012 [2024-11-06 14:33:22.510437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.012 [2024-11-06 14:33:22.510452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:83224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.012 [2024-11-06 14:33:22.510464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.012 [2024-11-06 14:33:22.510477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.012 [2024-11-06 14:33:22.510488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.012 [2024-11-06 14:33:22.510501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:83240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.012 [2024-11-06 14:33:22.510512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.012 [2024-11-06 14:33:22.510526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:83248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.012 [2024-11-06 14:33:22.510538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.012 [2024-11-06 14:33:22.510551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:83256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.012 [2024-11-06 14:33:22.510571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.012 [2024-11-06 14:33:22.510585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.012 [2024-11-06 14:33:22.510597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.012 [2024-11-06 14:33:22.510610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:83592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.012 [2024-11-06 14:33:22.510623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.013 [2024-11-06 14:33:22.510637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:60 nsid:1 lba:83600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.013 [2024-11-06 14:33:22.510660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.013 [2024-11-06 14:33:22.510674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:83608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.013 [2024-11-06 14:33:22.510686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.013 [2024-11-06 14:33:22.510700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:83616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.013 [2024-11-06 14:33:22.510712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.013 [2024-11-06 14:33:22.510725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:83624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.013 [2024-11-06 14:33:22.510737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.013 [2024-11-06 14:33:22.510753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:83632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.013 [2024-11-06 14:33:22.510765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.013 [2024-11-06 14:33:22.510779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:83640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.013 [2024-11-06 14:33:22.510792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.013 [2024-11-06 14:33:22.510807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:83648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.013 [2024-11-06 14:33:22.510819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.013 [2024-11-06 14:33:22.510833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:83656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.013 [2024-11-06 14:33:22.510856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.013 [2024-11-06 14:33:22.510870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:83664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.013 [2024-11-06 14:33:22.510882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.013 [2024-11-06 14:33:22.510896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:83672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.013 [2024-11-06 14:33:22.510909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.013 [2024-11-06 14:33:22.510923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:83680 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:28:55.013 [2024-11-06 14:33:22.510937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.013 [2024-11-06 14:33:22.510951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:83688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.013 [2024-11-06 14:33:22.510963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.013 [2024-11-06 14:33:22.510977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:83696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.013 [2024-11-06 14:33:22.510989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.013 [2024-11-06 14:33:22.511002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:83704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.013 [2024-11-06 14:33:22.511014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.013 [2024-11-06 14:33:22.511027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:83264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.013 [2024-11-06 14:33:22.511038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.013 [2024-11-06 14:33:22.511051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:83272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.013 [2024-11-06 14:33:22.511063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.013 [2024-11-06 14:33:22.511076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:83280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.013 [2024-11-06 14:33:22.511087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.013 [2024-11-06 14:33:22.511100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.013 [2024-11-06 14:33:22.511112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.013 [2024-11-06 14:33:22.511125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:83296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.013 [2024-11-06 14:33:22.511137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.013 [2024-11-06 14:33:22.511151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:83304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.013 [2024-11-06 14:33:22.511171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.013 [2024-11-06 14:33:22.511184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.013 
[2024-11-06 14:33:22.511195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.013 [2024-11-06 14:33:22.511209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:83320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.013 [2024-11-06 14:33:22.511221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.013 [2024-11-06 14:33:22.511234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.013 [2024-11-06 14:33:22.511247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.013 [2024-11-06 14:33:22.511261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:83336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.013 [2024-11-06 14:33:22.511273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.013 [2024-11-06 14:33:22.511288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:83344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.013 [2024-11-06 14:33:22.511300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.013 [2024-11-06 14:33:22.511313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:83352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.013 [2024-11-06 14:33:22.511325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.013 [2024-11-06 14:33:22.511338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:83360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.013 [2024-11-06 14:33:22.511350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.013 [2024-11-06 14:33:22.511363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:83368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.013 [2024-11-06 14:33:22.511374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.013 [2024-11-06 14:33:22.511387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:83376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.013 [2024-11-06 14:33:22.511400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.013 [2024-11-06 14:33:22.511413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:83384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.013 [2024-11-06 14:33:22.511425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.013 [2024-11-06 14:33:22.511439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:83712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.013 [2024-11-06 14:33:22.511450] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.013 [2024-11-06 14:33:22.511463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:83720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.013 [2024-11-06 14:33:22.511475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.013 [2024-11-06 14:33:22.511496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:83728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.013 [2024-11-06 14:33:22.511510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.013 [2024-11-06 14:33:22.511524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:83736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.013 [2024-11-06 14:33:22.511536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.013 [2024-11-06 14:33:22.511550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:83744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.013 [2024-11-06 14:33:22.511563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.013 [2024-11-06 14:33:22.511576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:83752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.013 [2024-11-06 14:33:22.511590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.013 [2024-11-06 14:33:22.511604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:55.013 [2024-11-06 14:33:22.511617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.013 [2024-11-06 14:33:22.511629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002bc80 is same with the state(6) to be set 00:28:55.013 [2024-11-06 14:33:22.511646] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:55.013 [2024-11-06 14:33:22.511656] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:55.013 [2024-11-06 14:33:22.511669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83768 len:8 PRP1 0x0 PRP2 0x0 00:28:55.013 [2024-11-06 14:33:22.511681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:55.013 [2024-11-06 14:33:22.512211] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:28:55.014 [2024-11-06 14:33:22.512314] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:28:55.014 [2024-11-06 14:33:22.512434] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.014 [2024-11-06 14:33:22.512455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with 
addr=10.0.0.3, port=4420 00:28:55.014 [2024-11-06 14:33:22.512471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:28:55.014 [2024-11-06 14:33:22.512493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:28:55.014 [2024-11-06 14:33:22.512512] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:28:55.014 [2024-11-06 14:33:22.512525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:28:55.014 [2024-11-06 14:33:22.512539] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:28:55.014 [2024-11-06 14:33:22.512554] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:28:55.014 [2024-11-06 14:33:22.512569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:28:55.014 14:33:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:28:55.952 5172.00 IOPS, 20.20 MiB/s [2024-11-06T14:33:23.587Z] [2024-11-06 14:33:23.511170] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.952 [2024-11-06 14:33:23.511261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:28:55.952 [2024-11-06 14:33:23.511281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:28:55.952 [2024-11-06 14:33:23.511320] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:28:55.952 [2024-11-06 14:33:23.511357] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:28:55.952 [2024-11-06 14:33:23.511372] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:28:55.952 [2024-11-06 14:33:23.511388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:28:55.952 [2024-11-06 14:33:23.511406] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:28:55.952 [2024-11-06 14:33:23.511422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:28:56.889 3448.00 IOPS, 13.47 MiB/s [2024-11-06T14:33:24.524Z] [2024-11-06 14:33:24.510011] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.889 [2024-11-06 14:33:24.510100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:28:56.889 [2024-11-06 14:33:24.510120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:28:56.889 [2024-11-06 14:33:24.510159] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:28:56.889 [2024-11-06 14:33:24.510185] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:28:56.889 [2024-11-06 14:33:24.510200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:28:56.890 [2024-11-06 14:33:24.510218] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:28:56.890 [2024-11-06 14:33:24.510237] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:28:56.890 [2024-11-06 14:33:24.510254] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:28:58.085 2586.00 IOPS, 10.10 MiB/s [2024-11-06T14:33:25.720Z] [2024-11-06 14:33:25.511231] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.085 [2024-11-06 14:33:25.511312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:28:58.086 [2024-11-06 14:33:25.511333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:28:58.086 [2024-11-06 14:33:25.511568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:28:58.086 [2024-11-06 14:33:25.511794] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:28:58.086 [2024-11-06 14:33:25.511818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:28:58.086 [2024-11-06 14:33:25.511834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:28:58.086 [2024-11-06 14:33:25.511867] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:28:58.086 [2024-11-06 14:33:25.511884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:28:58.086 14:33:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:28:58.345 [2024-11-06 14:33:25.732344] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:58.345 14:33:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 89369 00:28:58.913 2068.80 IOPS, 8.08 MiB/s [2024-11-06T14:33:26.548Z] [2024-11-06 14:33:26.537364] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 00:29:00.810 3180.67 IOPS, 12.42 MiB/s [2024-11-06T14:33:29.825Z] 4174.57 IOPS, 16.31 MiB/s [2024-11-06T14:33:30.769Z] 4929.50 IOPS, 19.26 MiB/s [2024-11-06T14:33:31.706Z] 5519.56 IOPS, 21.56 MiB/s [2024-11-06T14:33:31.706Z] 5991.60 IOPS, 23.40 MiB/s 00:29:04.071 Latency(us) 00:29:04.071 [2024-11-06T14:33:31.706Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:04.071 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:29:04.071 Verification LBA range: start 0x0 length 0x4000 00:29:04.071 NVMe0n1 : 10.01 5998.60 23.43 4707.25 0.00 11932.54 569.16 3018551.31 00:29:04.071 [2024-11-06T14:33:31.706Z] =================================================================================================================== 00:29:04.071 [2024-11-06T14:33:31.706Z] Total : 5998.60 23.43 4707.25 0.00 11932.54 0.00 3018551.31 00:29:04.071 { 00:29:04.071 "results": [ 00:29:04.071 { 00:29:04.071 "job": "NVMe0n1", 00:29:04.071 "core_mask": "0x4", 00:29:04.071 "workload": "verify", 00:29:04.071 "status": "finished", 00:29:04.071 "verify_range": { 00:29:04.071 "start": 0, 00:29:04.071 "length": 16384 00:29:04.071 }, 00:29:04.071 "queue_depth": 128, 00:29:04.071 "io_size": 4096, 00:29:04.071 "runtime": 10.009672, 00:29:04.071 "iops": 5998.598155863649, 00:29:04.071 "mibps": 23.432024046342377, 00:29:04.071 "io_failed": 47118, 00:29:04.071 "io_timeout": 0, 00:29:04.071 "avg_latency_us": 11932.536252008651, 00:29:04.071 "min_latency_us": 569.1630522088353, 00:29:04.071 "max_latency_us": 3018551.3124497994 00:29:04.071 } 00:29:04.071 ], 00:29:04.071 "core_count": 1 00:29:04.071 } 00:29:04.071 14:33:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 89246 00:29:04.071 14:33:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' -z 89246 ']' 00:29:04.071 14:33:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # kill -0 89246 00:29:04.071 14:33:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # uname 00:29:04.071 14:33:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:04.071 14:33:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 89246 00:29:04.071 killing process with pid 89246 00:29:04.071 Received shutdown signal, test time was about 10.000000 seconds 00:29:04.071 00:29:04.071 Latency(us) 00:29:04.071 [2024-11-06T14:33:31.706Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:04.071 [2024-11-06T14:33:31.706Z] =================================================================================================================== 00:29:04.071 [2024-11-06T14:33:31.706Z] 
Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:04.071 14:33:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:29:04.071 14:33:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:29:04.071 14:33:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 89246' 00:29:04.071 14:33:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@971 -- # kill 89246 00:29:04.071 14:33:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@976 -- # wait 89246 00:29:05.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:05.012 14:33:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:29:05.012 14:33:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=89490 00:29:05.012 14:33:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 89490 /var/tmp/bdevperf.sock 00:29:05.012 14:33:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # '[' -z 89490 ']' 00:29:05.012 14:33:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:05.012 14:33:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:05.012 14:33:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:05.012 14:33:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:05.012 14:33:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:05.012 [2024-11-06 14:33:32.562892] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:29:05.012 [2024-11-06 14:33:32.563019] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89490 ] 00:29:05.272 [2024-11-06 14:33:32.743861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:05.272 [2024-11-06 14:33:32.861559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:05.532 [2024-11-06 14:33:33.070343] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:29:05.791 14:33:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:05.791 14:33:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@866 -- # return 0 00:29:05.791 14:33:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 89490 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:29:05.791 14:33:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=89506 00:29:05.791 14:33:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:29:06.050 14:33:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:29:06.310 NVMe0n1 00:29:06.569 14:33:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=89546 00:29:06.569 14:33:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:06.569 14:33:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:29:06.569 Running I/O for 10 seconds... 
00:29:07.507 14:33:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:29:07.770 16256.00 IOPS, 63.50 MiB/s [2024-11-06T14:33:35.405Z] [2024-11-06 14:33:35.161020] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:07.770 [2024-11-06 14:33:35.161088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:07.770 [2024-11-06 14:33:35.161102] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:07.770 [2024-11-06 14:33:35.161115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:07.770 [2024-11-06 14:33:35.161126] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:07.770 [2024-11-06 14:33:35.161139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:07.770 [2024-11-06 14:33:35.161149] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:07.770 [2024-11-06 14:33:35.161161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:07.770 [2024-11-06 14:33:35.161171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:07.770 [2024-11-06 14:33:35.161183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:07.770 [2024-11-06 14:33:35.161194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:07.770 [2024-11-06 14:33:35.161207] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:07.770 [2024-11-06 14:33:35.161217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:07.770 [2024-11-06 14:33:35.161250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:07.770 [2024-11-06 14:33:35.161261] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:07.770 [2024-11-06 14:33:35.161274] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:07.770 [2024-11-06 14:33:35.161284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:07.770 [2024-11-06 14:33:35.161296] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:07.770 [2024-11-06 14:33:35.161306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:07.770 [2024-11-06 14:33:35.161319] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:07.770 [2024-11-06 14:33:35.161329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:07.770 [2024-11-06 14:33:35.161341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:07.770 [2024-11-06 14:33:35.161352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:07.770 [2024-11-06 14:33:35.161364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:07.770 [2024-11-06 14:33:35.161374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:07.770 [2024-11-06 14:33:35.161386] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:07.770 [2024-11-06 14:33:35.161396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:07.770 [2024-11-06 14:33:35.161408] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:07.770 [2024-11-06 14:33:35.161418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:07.770 [2024-11-06 14:33:35.161433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:07.770 [2024-11-06 14:33:35.161443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:07.770 [2024-11-06 14:33:35.161455] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:07.770 [2024-11-06 14:33:35.161466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:07.770 [2024-11-06 14:33:35.161478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:07.770 [2024-11-06 14:33:35.161490] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:07.770 [2024-11-06 14:33:35.161503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:07.770 [2024-11-06 14:33:35.161513] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:07.770 [2024-11-06 14:33:35.161526] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:07.770 [2024-11-06 14:33:35.161536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:07.770 [2024-11-06 14:33:35.161550] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:07.770 [2024-11-06 14:33:35.161560] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:07.770 [2024-11-06 14:33:35.161573] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:07.770 [2024-11-06 14:33:35.161583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:07.770 [2024-11-06 14:33:35.161595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:07.770 [2024-11-06 14:33:35.161605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:07.770 [2024-11-06 14:33:35.161621] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:07.770 [2024-11-06 14:33:35.161631] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:07.770 [2024-11-06 14:33:35.161643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:07.770 [2024-11-06 14:33:35.161653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:07.770 [2024-11-06 14:33:35.161665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:07.770 [2024-11-06 14:33:35.161675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:07.770 [2024-11-06 14:33:35.161687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:07.770 [2024-11-06 14:33:35.161697] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:07.770 [2024-11-06 14:33:35.161709] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:07.770 [2024-11-06 14:33:35.161718] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:07.770 [2024-11-06 14:33:35.161731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:07.770 [2024-11-06 14:33:35.161740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:07.770 [2024-11-06 14:33:35.161752] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:07.770 [2024-11-06 14:33:35.161762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:07.770 [2024-11-06 14:33:35.161774] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:29:07.771 [2024-11-06 14:33:35.161877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:52048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.771 [2024-11-06 
14:33:35.161933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.771 [2024-11-06 14:33:35.161968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:4928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.771 [2024-11-06 14:33:35.161981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.771 [2024-11-06 14:33:35.161999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.771 [2024-11-06 14:33:35.162011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.771 [2024-11-06 14:33:35.162028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:59048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.771 [2024-11-06 14:33:35.162041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.771 [2024-11-06 14:33:35.162057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:77600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.771 [2024-11-06 14:33:35.162070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.771 [2024-11-06 14:33:35.162090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:3040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.771 [2024-11-06 14:33:35.162102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.771 [2024-11-06 14:33:35.162123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:11008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.771 [2024-11-06 14:33:35.162135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.771 [2024-11-06 14:33:35.162152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:91568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.771 [2024-11-06 14:33:35.162165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.771 [2024-11-06 14:33:35.162182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:117560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.771 [2024-11-06 14:33:35.162194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.771 [2024-11-06 14:33:35.162211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:127904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.771 [2024-11-06 14:33:35.162223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.771 [2024-11-06 14:33:35.162239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:81512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.771 [2024-11-06 14:33:35.162251] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.771 [2024-11-06 14:33:35.162267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:121528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.771 [2024-11-06 14:33:35.162278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.771 [2024-11-06 14:33:35.162295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:126448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.771 [2024-11-06 14:33:35.162308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.771 [2024-11-06 14:33:35.162325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:107968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.771 [2024-11-06 14:33:35.162337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.771 [2024-11-06 14:33:35.162356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:31288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.771 [2024-11-06 14:33:35.162368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.771 [2024-11-06 14:33:35.162385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:56784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.771 [2024-11-06 14:33:35.162397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.771 [2024-11-06 14:33:35.162414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:110400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.771 [2024-11-06 14:33:35.162428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.771 [2024-11-06 14:33:35.162444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:121600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.771 [2024-11-06 14:33:35.162456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.771 [2024-11-06 14:33:35.162475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.771 [2024-11-06 14:33:35.162487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.771 [2024-11-06 14:33:35.162503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:27784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.771 [2024-11-06 14:33:35.162515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.771 [2024-11-06 14:33:35.162532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:57472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.771 [2024-11-06 14:33:35.162544] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.771 [2024-11-06 14:33:35.162559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:22944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.771 [2024-11-06 14:33:35.162581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.771 [2024-11-06 14:33:35.162600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:120624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.771 [2024-11-06 14:33:35.162612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.771 [2024-11-06 14:33:35.162628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:55760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.771 [2024-11-06 14:33:35.162640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.771 [2024-11-06 14:33:35.162656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:89232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.771 [2024-11-06 14:33:35.162667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.771 [2024-11-06 14:33:35.162684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:1392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.771 [2024-11-06 14:33:35.162696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.771 [2024-11-06 14:33:35.162713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:102944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.771 [2024-11-06 14:33:35.162725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.771 [2024-11-06 14:33:35.162742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:40504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.771 [2024-11-06 14:33:35.162754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.771 [2024-11-06 14:33:35.162769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:55264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.771 [2024-11-06 14:33:35.162781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.771 [2024-11-06 14:33:35.162797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:28584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.771 [2024-11-06 14:33:35.162808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.771 [2024-11-06 14:33:35.162828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:43272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.771 [2024-11-06 14:33:35.162849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.771 [2024-11-06 14:33:35.162869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.771 [2024-11-06 14:33:35.162887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.771 [2024-11-06 14:33:35.162907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:77896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.771 [2024-11-06 14:33:35.162919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.771 [2024-11-06 14:33:35.162937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:50976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.771 [2024-11-06 14:33:35.162949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.771 [2024-11-06 14:33:35.162966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:119944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.771 [2024-11-06 14:33:35.162978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.771 [2024-11-06 14:33:35.162994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:68064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.771 [2024-11-06 14:33:35.163006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.771 [2024-11-06 14:33:35.163022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:15672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.771 [2024-11-06 14:33:35.163034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.771 [2024-11-06 14:33:35.163051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:26624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.771 [2024-11-06 14:33:35.163062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.771 [2024-11-06 14:33:35.163081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:110288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.771 [2024-11-06 14:33:35.163093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.771 [2024-11-06 14:33:35.163109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:36280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.771 [2024-11-06 14:33:35.163120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.772 [2024-11-06 14:33:35.163136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:114560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.772 [2024-11-06 14:33:35.163147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:29:07.772 [2024-11-06 14:33:35.163163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:52256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.772 [2024-11-06 14:33:35.163178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.772 [2024-11-06 14:33:35.163193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:91016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.772 [2024-11-06 14:33:35.163205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.772 [2024-11-06 14:33:35.163221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:49592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.772 [2024-11-06 14:33:35.163233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.772 [2024-11-06 14:33:35.163250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:94624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.772 [2024-11-06 14:33:35.163262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.772 [2024-11-06 14:33:35.163279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:64200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.772 [2024-11-06 14:33:35.163292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.772 [2024-11-06 14:33:35.163326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:90048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.772 [2024-11-06 14:33:35.163338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.772 [2024-11-06 14:33:35.163355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:109032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.772 [2024-11-06 14:33:35.163368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.772 [2024-11-06 14:33:35.163386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:96104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.772 [2024-11-06 14:33:35.163399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.772 [2024-11-06 14:33:35.163415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:30880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.772 [2024-11-06 14:33:35.163427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.772 [2024-11-06 14:33:35.163445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:41520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.772 [2024-11-06 14:33:35.163457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.772 
[2024-11-06 14:33:35.163473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:102344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.772 [2024-11-06 14:33:35.163485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.772 [2024-11-06 14:33:35.163502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:77376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.772 [2024-11-06 14:33:35.163513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.772 [2024-11-06 14:33:35.163529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:38928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.772 [2024-11-06 14:33:35.163540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.772 [2024-11-06 14:33:35.163559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:125672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.772 [2024-11-06 14:33:35.163570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.772 [2024-11-06 14:33:35.163585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:62056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.772 [2024-11-06 14:33:35.163597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.772 [2024-11-06 14:33:35.163612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:48400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.772 [2024-11-06 14:33:35.163624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.772 [2024-11-06 14:33:35.163641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:26816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.772 [2024-11-06 14:33:35.163653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.772 [2024-11-06 14:33:35.163669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:2984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.772 [2024-11-06 14:33:35.163681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.772 [2024-11-06 14:33:35.163697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:30280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.772 [2024-11-06 14:33:35.163709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.772 [2024-11-06 14:33:35.163724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:1080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.772 [2024-11-06 14:33:35.163736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.772 [2024-11-06 14:33:35.163752] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:108544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.772 [2024-11-06 14:33:35.163763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.772 [2024-11-06 14:33:35.163782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:14200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.772 [2024-11-06 14:33:35.163794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.772 [2024-11-06 14:33:35.163810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:111968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.772 [2024-11-06 14:33:35.163825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.772 [2024-11-06 14:33:35.163854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:19872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.772 [2024-11-06 14:33:35.163867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.772 [2024-11-06 14:33:35.163884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:94360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.772 [2024-11-06 14:33:35.163896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.772 [2024-11-06 14:33:35.163913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:29448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.772 [2024-11-06 14:33:35.163924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.772 [2024-11-06 14:33:35.163941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:85816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.772 [2024-11-06 14:33:35.163953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.772 [2024-11-06 14:33:35.163969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:130208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.772 [2024-11-06 14:33:35.163981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.772 [2024-11-06 14:33:35.163999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:60096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.772 [2024-11-06 14:33:35.164011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.772 [2024-11-06 14:33:35.164031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:17528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.772 [2024-11-06 14:33:35.164044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.772 [2024-11-06 14:33:35.164060] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:121056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.772 [2024-11-06 14:33:35.164071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.772 [2024-11-06 14:33:35.164087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:56880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.772 [2024-11-06 14:33:35.164098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.772 [2024-11-06 14:33:35.164115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:89152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.772 [2024-11-06 14:33:35.164126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.772 [2024-11-06 14:33:35.164142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:17272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.772 [2024-11-06 14:33:35.164153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.772 [2024-11-06 14:33:35.164169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:93392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.772 [2024-11-06 14:33:35.164180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.772 [2024-11-06 14:33:35.164197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:129184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.772 [2024-11-06 14:33:35.164209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.772 [2024-11-06 14:33:35.164224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.772 [2024-11-06 14:33:35.164236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.772 [2024-11-06 14:33:35.164255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:76064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.772 [2024-11-06 14:33:35.164266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.772 [2024-11-06 14:33:35.164282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.772 [2024-11-06 14:33:35.164299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.773 [2024-11-06 14:33:35.164314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:7136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.773 [2024-11-06 14:33:35.164326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.773 [2024-11-06 14:33:35.164341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:45 nsid:1 lba:69608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.773 [2024-11-06 14:33:35.164353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.773 [2024-11-06 14:33:35.164368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:114712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.773 [2024-11-06 14:33:35.164379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.773 [2024-11-06 14:33:35.164397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:39216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.773 [2024-11-06 14:33:35.164408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.773 [2024-11-06 14:33:35.164426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:45408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.773 [2024-11-06 14:33:35.164438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.773 [2024-11-06 14:33:35.164454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:57616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.773 [2024-11-06 14:33:35.164466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.773 [2024-11-06 14:33:35.164485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:70784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.773 [2024-11-06 14:33:35.164496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.773 [2024-11-06 14:33:35.164512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.773 [2024-11-06 14:33:35.164523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.773 [2024-11-06 14:33:35.164539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:76768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.773 [2024-11-06 14:33:35.164551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.773 [2024-11-06 14:33:35.164567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:75688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.773 [2024-11-06 14:33:35.164579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.773 [2024-11-06 14:33:35.164595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:99336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.773 [2024-11-06 14:33:35.164607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.773 [2024-11-06 14:33:35.164623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:120432 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.773 [2024-11-06 14:33:35.164635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.773 [2024-11-06 14:33:35.164651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:120800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.773 [2024-11-06 14:33:35.164662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.773 [2024-11-06 14:33:35.164678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:68840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.773 [2024-11-06 14:33:35.164689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.773 [2024-11-06 14:33:35.164708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:11712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.773 [2024-11-06 14:33:35.164719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.773 [2024-11-06 14:33:35.164735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:68632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.773 [2024-11-06 14:33:35.164752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.773 [2024-11-06 14:33:35.164768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:35656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.773 [2024-11-06 14:33:35.164780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.773 [2024-11-06 14:33:35.164798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:120736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.773 [2024-11-06 14:33:35.164810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.773 [2024-11-06 14:33:35.164827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:8800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.773 [2024-11-06 14:33:35.164847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.773 [2024-11-06 14:33:35.164865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:89768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.773 [2024-11-06 14:33:35.164876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.773 [2024-11-06 14:33:35.164892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:46440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.773 [2024-11-06 14:33:35.164904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.773 [2024-11-06 14:33:35.164920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:64528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:07.773 [2024-11-06 14:33:35.164933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.773 [2024-11-06 14:33:35.164953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:111616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.773 [2024-11-06 14:33:35.164965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.773 [2024-11-06 14:33:35.164982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:61216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.773 [2024-11-06 14:33:35.164993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.773 [2024-11-06 14:33:35.165010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:28464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.773 [2024-11-06 14:33:35.165021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.773 [2024-11-06 14:33:35.165037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:125944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.773 [2024-11-06 14:33:35.165049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.773 [2024-11-06 14:33:35.165082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:1600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.773 [2024-11-06 14:33:35.165094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.773 [2024-11-06 14:33:35.165109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:115720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.773 [2024-11-06 14:33:35.165120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.773 [2024-11-06 14:33:35.165136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:101120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.773 [2024-11-06 14:33:35.165148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.773 [2024-11-06 14:33:35.165164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:108184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.773 [2024-11-06 14:33:35.165176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.773 [2024-11-06 14:33:35.165206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:114256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.773 [2024-11-06 14:33:35.165218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.773 [2024-11-06 14:33:35.165234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:39264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.773 [2024-11-06 
14:33:35.165250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.773 [2024-11-06 14:33:35.165266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:99520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.773 [2024-11-06 14:33:35.165277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.773 [2024-11-06 14:33:35.165293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:92360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.773 [2024-11-06 14:33:35.165304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.773 [2024-11-06 14:33:35.165320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:111320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.773 [2024-11-06 14:33:35.165331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.773 [2024-11-06 14:33:35.165346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:110760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.773 [2024-11-06 14:33:35.165358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.773 [2024-11-06 14:33:35.165373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:37320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.773 [2024-11-06 14:33:35.165385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.773 [2024-11-06 14:33:35.165400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:72880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.773 [2024-11-06 14:33:35.165412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.773 [2024-11-06 14:33:35.165430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:48104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.773 [2024-11-06 14:33:35.165442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.773 [2024-11-06 14:33:35.165458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:73960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.774 [2024-11-06 14:33:35.165469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.774 [2024-11-06 14:33:35.165485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.774 [2024-11-06 14:33:35.165496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.774 [2024-11-06 14:33:35.165511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:100904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.774 [2024-11-06 14:33:35.165522] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.774 [2024-11-06 14:33:35.165538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:115512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.774 [2024-11-06 14:33:35.165550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.774 [2024-11-06 14:33:35.165567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:83624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.774 [2024-11-06 14:33:35.165579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.774 [2024-11-06 14:33:35.165594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:47848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.774 [2024-11-06 14:33:35.165606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.774 [2024-11-06 14:33:35.165622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.774 [2024-11-06 14:33:35.165634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.774 [2024-11-06 14:33:35.165652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.774 [2024-11-06 14:33:35.165663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.774 [2024-11-06 14:33:35.165678] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b280 is same with the state(6) to be set 00:29:07.774 [2024-11-06 14:33:35.165701] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:07.774 [2024-11-06 14:33:35.165715] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:07.774 [2024-11-06 14:33:35.165727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:37872 len:8 PRP1 0x0 PRP2 0x0 00:29:07.774 [2024-11-06 14:33:35.165743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.774 [2024-11-06 14:33:35.166304] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:29:07.774 [2024-11-06 14:33:35.166413] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:29:07.774 [2024-11-06 14:33:35.166574] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.774 [2024-11-06 14:33:35.166600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420 00:29:07.774 [2024-11-06 14:33:35.166617] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:29:07.774 [2024-11-06 14:33:35.166647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file 
descriptor 00:29:07.774 [2024-11-06 14:33:35.166670] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:29:07.774 [2024-11-06 14:33:35.166684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:29:07.774 [2024-11-06 14:33:35.166705] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:29:07.774 [2024-11-06 14:33:35.166720] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:29:07.774 [2024-11-06 14:33:35.166737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:29:07.774 14:33:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 89546 00:29:09.649 8923.00 IOPS, 34.86 MiB/s [2024-11-06T14:33:37.284Z] 5948.67 IOPS, 23.24 MiB/s [2024-11-06T14:33:37.284Z] [2024-11-06 14:33:37.163731] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.649 [2024-11-06 14:33:37.163813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420 00:29:09.649 [2024-11-06 14:33:37.163853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:29:09.649 [2024-11-06 14:33:37.163888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:29:09.649 [2024-11-06 14:33:37.163916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:29:09.649 [2024-11-06 14:33:37.163931] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:29:09.649 [2024-11-06 14:33:37.163950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:29:09.649 [2024-11-06 14:33:37.163967] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
00:29:09.649 [2024-11-06 14:33:37.163986] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:29:11.526 4461.50 IOPS, 17.43 MiB/s [2024-11-06T14:33:39.421Z] 3569.20 IOPS, 13.94 MiB/s [2024-11-06T14:33:39.421Z] [2024-11-06 14:33:39.161008] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.786 [2024-11-06 14:33:39.161093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420 00:29:11.786 [2024-11-06 14:33:39.161123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:29:11.786 [2024-11-06 14:33:39.161159] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:29:11.786 [2024-11-06 14:33:39.161187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:29:11.786 [2024-11-06 14:33:39.161201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:29:11.786 [2024-11-06 14:33:39.161220] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:29:11.786 [2024-11-06 14:33:39.161237] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:29:11.786 [2024-11-06 14:33:39.161256] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:29:13.659 2974.33 IOPS, 11.62 MiB/s [2024-11-06T14:33:41.294Z] 2549.43 IOPS, 9.96 MiB/s [2024-11-06T14:33:41.294Z] [2024-11-06 14:33:41.158115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:29:13.659 [2024-11-06 14:33:41.158196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:29:13.659 [2024-11-06 14:33:41.158212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:29:13.659 [2024-11-06 14:33:41.158231] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:29:13.659 [2024-11-06 14:33:41.158251] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
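The 'resetting controller' / 'Resetting controller failed' cycle above repeats at roughly two-second intervals while the target port refuses connections (connect() errno = 111), which is the bdev_nvme delayed-reconnect path this timeout test exercises. A minimal sketch of how that behaviour can be set up and checked outside the harness is given below; the controller name NVMe0, the 2 s / 5 s timeout values, the target address and the trace.txt path are illustrative assumptions rather than the exact arguments host/timeout.sh passes, although --reconnect-delay-sec, --ctrlr-loss-timeout-sec and --fast-io-fail-timeout-sec are the standard bdev_nvme_attach_controller options for this policy:

# Attach an NVMe-oF/TCP controller with an explicit reconnect policy
# (address, NQN and timeout values here are illustrative).
scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -f ipv4 \
    -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
    --reconnect-delay-sec 2 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2
# While the port is unreachable, every failed reset schedules a delayed
# reconnect; those events can be counted from the recorded trace file:
reconnects=$(grep -c 'reconnect delay bdev controller NVMe0' trace.txt)
(( reconnects > 2 )) || echo "expected at least 3 reconnect delays, got $reconnects"

With a 2 s reconnect delay, a handful of delayed reconnect attempts fit in before the controller-loss timeout marks the controller failed, which is consistent with the three 'reconnect delay bdev controller NVMe0' entries counted from trace.txt in the summary that follows.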
00:29:14.596 2230.75 IOPS, 8.71 MiB/s 00:29:14.597 Latency(us) 00:29:14.597 [2024-11-06T14:33:42.232Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:14.597 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:29:14.597 NVMe0n1 : 8.11 2201.23 8.60 15.79 0.00 57895.50 7632.71 7061253.96 00:29:14.597 [2024-11-06T14:33:42.232Z] =================================================================================================================== 00:29:14.597 [2024-11-06T14:33:42.232Z] Total : 2201.23 8.60 15.79 0.00 57895.50 7632.71 7061253.96 00:29:14.597 { 00:29:14.597 "results": [ 00:29:14.597 { 00:29:14.597 "job": "NVMe0n1", 00:29:14.597 "core_mask": "0x4", 00:29:14.597 "workload": "randread", 00:29:14.597 "status": "finished", 00:29:14.597 "queue_depth": 128, 00:29:14.597 "io_size": 4096, 00:29:14.597 "runtime": 8.107289, 00:29:14.597 "iops": 2201.229042161936, 00:29:14.597 "mibps": 8.598550945945062, 00:29:14.597 "io_failed": 128, 00:29:14.597 "io_timeout": 0, 00:29:14.597 "avg_latency_us": 57895.500064305284, 00:29:14.597 "min_latency_us": 7632.706827309237, 00:29:14.597 "max_latency_us": 7061253.963052209 00:29:14.597 } 00:29:14.597 ], 00:29:14.597 "core_count": 1 00:29:14.597 } 00:29:14.597 14:33:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:14.597 Attaching 5 probes... 00:29:14.597 1221.531410: reset bdev controller NVMe0 00:29:14.597 1221.711409: reconnect bdev controller NVMe0 00:29:14.597 3218.798137: reconnect delay bdev controller NVMe0 00:29:14.597 3218.823963: reconnect bdev controller NVMe0 00:29:14.597 5216.063510: reconnect delay bdev controller NVMe0 00:29:14.597 5216.087840: reconnect bdev controller NVMe0 00:29:14.597 7213.324122: reconnect delay bdev controller NVMe0 00:29:14.597 7213.348763: reconnect bdev controller NVMe0 00:29:14.597 14:33:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:29:14.597 14:33:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:29:14.597 14:33:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 89506 00:29:14.597 14:33:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:14.597 14:33:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 89490 00:29:14.597 14:33:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' -z 89490 ']' 00:29:14.597 14:33:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # kill -0 89490 00:29:14.597 14:33:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # uname 00:29:14.597 14:33:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:14.597 14:33:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 89490 00:29:14.854 14:33:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:29:14.854 14:33:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:29:14.854 killing process with pid 89490 00:29:14.854 14:33:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 89490' 00:29:14.854 14:33:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@971 -- # kill 89490 00:29:14.854 Received 
shutdown signal, test time was about 8.201398 seconds 00:29:14.854 00:29:14.854 Latency(us) 00:29:14.854 [2024-11-06T14:33:42.489Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:14.854 [2024-11-06T14:33:42.489Z] =================================================================================================================== 00:29:14.854 [2024-11-06T14:33:42.489Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:14.854 14:33:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@976 -- # wait 89490 00:29:15.798 14:33:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:16.077 14:33:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:29:16.077 14:33:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:29:16.077 14:33:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:16.077 14:33:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:29:16.077 14:33:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:16.077 14:33:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:29:16.077 14:33:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:16.077 14:33:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:16.077 rmmod nvme_tcp 00:29:16.077 rmmod nvme_fabrics 00:29:16.077 rmmod nvme_keyring 00:29:16.077 14:33:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:16.077 14:33:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:29:16.077 14:33:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:29:16.077 14:33:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 89044 ']' 00:29:16.077 14:33:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 89044 00:29:16.077 14:33:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' -z 89044 ']' 00:29:16.077 14:33:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # kill -0 89044 00:29:16.077 14:33:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # uname 00:29:16.336 14:33:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:16.336 14:33:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 89044 00:29:16.336 14:33:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:16.336 killing process with pid 89044 00:29:16.336 14:33:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:16.336 14:33:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 89044' 00:29:16.336 14:33:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@971 -- # kill 89044 00:29:16.336 14:33:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@976 -- # wait 89044 00:29:17.715 14:33:45 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:17.715 14:33:45 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:17.715 14:33:45 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:17.715 14:33:45 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:29:17.715 14:33:45 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:29:17.715 14:33:45 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:29:17.715 14:33:45 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:17.715 14:33:45 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:17.715 14:33:45 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:29:17.715 14:33:45 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:29:17.715 14:33:45 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:29:17.715 14:33:45 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:29:17.715 14:33:45 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:29:17.715 14:33:45 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:29:17.715 14:33:45 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:29:17.715 14:33:45 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:29:17.715 14:33:45 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:29:17.715 14:33:45 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:29:17.974 14:33:45 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:29:17.974 14:33:45 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:29:17.974 14:33:45 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:17.974 14:33:45 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:17.974 14:33:45 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:29:17.974 14:33:45 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:17.974 14:33:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:17.975 14:33:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:17.975 14:33:45 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:29:17.975 ************************************ 00:29:17.975 END TEST nvmf_timeout 00:29:17.975 ************************************ 00:29:17.975 00:29:17.975 real 0m50.459s 00:29:17.975 user 2m22.606s 00:29:17.975 sys 0m7.645s 00:29:17.975 14:33:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:17.975 14:33:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:17.975 14:33:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:29:17.975 14:33:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:29:17.975 ************************************ 00:29:17.975 END TEST nvmf_host 00:29:17.975 ************************************ 00:29:17.975 00:29:17.975 real 6m26.714s 00:29:17.975 user 17m8.744s 00:29:17.975 sys 1m37.945s 00:29:17.975 14:33:45 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:29:17.975 14:33:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.234 14:33:45 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:29:18.234 14:33:45 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:29:18.234 ************************************ 00:29:18.234 END TEST nvmf_tcp 00:29:18.234 ************************************ 00:29:18.234 00:29:18.234 real 17m2.705s 00:29:18.234 user 42m40.152s 00:29:18.234 sys 5m0.492s 00:29:18.234 14:33:45 nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:18.234 14:33:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:18.234 14:33:45 -- spdk/autotest.sh@281 -- # [[ 1 -eq 0 ]] 00:29:18.234 14:33:45 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:29:18.234 14:33:45 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:29:18.234 14:33:45 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:18.234 14:33:45 -- common/autotest_common.sh@10 -- # set +x 00:29:18.234 ************************************ 00:29:18.234 START TEST nvmf_dif 00:29:18.234 ************************************ 00:29:18.234 14:33:45 nvmf_dif -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:29:18.234 * Looking for test storage... 00:29:18.234 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:29:18.234 14:33:45 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:18.234 14:33:45 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:29:18.234 14:33:45 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:18.494 14:33:45 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:18.494 14:33:45 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:18.494 14:33:45 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:18.494 14:33:45 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:18.494 14:33:45 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:29:18.494 14:33:45 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:29:18.494 14:33:45 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:29:18.494 14:33:45 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:29:18.494 14:33:45 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:29:18.494 14:33:45 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:29:18.494 14:33:45 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:29:18.494 14:33:45 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:18.494 14:33:45 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:29:18.494 14:33:45 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:29:18.494 14:33:45 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:18.494 14:33:45 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:18.494 14:33:45 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:29:18.494 14:33:45 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:29:18.494 14:33:45 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:18.494 14:33:45 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:29:18.494 14:33:45 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:29:18.494 14:33:45 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:29:18.494 14:33:45 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:29:18.494 14:33:45 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:18.494 14:33:45 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:29:18.494 14:33:45 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:29:18.494 14:33:45 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:18.494 14:33:45 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:18.494 14:33:45 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:29:18.494 14:33:45 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:18.494 14:33:45 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:18.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.494 --rc genhtml_branch_coverage=1 00:29:18.494 --rc genhtml_function_coverage=1 00:29:18.494 --rc genhtml_legend=1 00:29:18.494 --rc geninfo_all_blocks=1 00:29:18.494 --rc geninfo_unexecuted_blocks=1 00:29:18.494 00:29:18.494 ' 00:29:18.494 14:33:45 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:18.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.494 --rc genhtml_branch_coverage=1 00:29:18.494 --rc genhtml_function_coverage=1 00:29:18.494 --rc genhtml_legend=1 00:29:18.495 --rc geninfo_all_blocks=1 00:29:18.495 --rc geninfo_unexecuted_blocks=1 00:29:18.495 00:29:18.495 ' 00:29:18.495 14:33:45 nvmf_dif -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:18.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.495 --rc genhtml_branch_coverage=1 00:29:18.495 --rc genhtml_function_coverage=1 00:29:18.495 --rc genhtml_legend=1 00:29:18.495 --rc geninfo_all_blocks=1 00:29:18.495 --rc geninfo_unexecuted_blocks=1 00:29:18.495 00:29:18.495 ' 00:29:18.495 14:33:45 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:18.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.495 --rc genhtml_branch_coverage=1 00:29:18.495 --rc genhtml_function_coverage=1 00:29:18.495 --rc genhtml_legend=1 00:29:18.495 --rc geninfo_all_blocks=1 00:29:18.495 --rc geninfo_unexecuted_blocks=1 00:29:18.495 00:29:18.495 ' 00:29:18.495 14:33:45 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:18.495 14:33:45 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:29:18.495 14:33:45 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:18.495 14:33:45 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:18.495 14:33:45 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:18.495 14:33:45 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:18.495 14:33:45 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:18.495 14:33:45 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:18.495 14:33:45 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:18.495 14:33:45 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:18.495 14:33:45 nvmf_dif -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:18.495 14:33:45 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:18.495 14:33:45 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:29:18.495 14:33:45 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:29:18.495 14:33:45 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:18.495 14:33:45 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:18.495 14:33:45 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:18.495 14:33:45 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:18.495 14:33:45 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:18.495 14:33:45 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:29:18.495 14:33:45 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:18.495 14:33:45 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:18.495 14:33:45 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:18.495 14:33:45 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.495 14:33:45 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.495 14:33:45 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.495 14:33:45 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:29:18.495 14:33:45 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.495 14:33:45 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:29:18.495 14:33:45 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:18.495 14:33:45 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:18.495 14:33:45 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:18.495 14:33:45 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:18.495 14:33:45 nvmf_dif -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:18.495 14:33:45 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:18.495 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:18.495 14:33:45 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:18.495 14:33:45 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:18.495 14:33:45 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:18.495 14:33:45 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:29:18.495 14:33:45 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:29:18.495 14:33:45 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:29:18.495 14:33:45 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:29:18.495 14:33:45 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:29:18.495 14:33:45 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:18.495 14:33:45 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:18.495 14:33:45 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:18.495 14:33:45 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:18.495 14:33:45 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:18.495 14:33:45 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:18.495 14:33:45 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:18.495 14:33:45 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:18.495 14:33:45 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:29:18.495 14:33:45 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:29:18.495 14:33:45 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:29:18.495 14:33:45 nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:29:18.495 14:33:45 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:29:18.495 14:33:45 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:29:18.495 14:33:45 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:18.495 14:33:45 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:29:18.495 14:33:45 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:29:18.495 14:33:45 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:29:18.495 14:33:45 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:18.495 14:33:45 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:29:18.495 14:33:45 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:18.495 14:33:45 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:29:18.495 14:33:45 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:18.495 14:33:45 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:29:18.495 14:33:45 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:18.495 14:33:45 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:18.495 14:33:45 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:18.495 14:33:45 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:18.495 14:33:45 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:18.495 14:33:45 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:18.495 14:33:45 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:29:18.495 Cannot find device 
"nvmf_init_br" 00:29:18.495 14:33:45 nvmf_dif -- nvmf/common.sh@162 -- # true 00:29:18.495 14:33:45 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:29:18.495 Cannot find device "nvmf_init_br2" 00:29:18.495 14:33:45 nvmf_dif -- nvmf/common.sh@163 -- # true 00:29:18.495 14:33:45 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:29:18.495 Cannot find device "nvmf_tgt_br" 00:29:18.495 14:33:46 nvmf_dif -- nvmf/common.sh@164 -- # true 00:29:18.495 14:33:46 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:29:18.495 Cannot find device "nvmf_tgt_br2" 00:29:18.495 14:33:46 nvmf_dif -- nvmf/common.sh@165 -- # true 00:29:18.495 14:33:46 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:29:18.495 Cannot find device "nvmf_init_br" 00:29:18.495 14:33:46 nvmf_dif -- nvmf/common.sh@166 -- # true 00:29:18.495 14:33:46 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:29:18.495 Cannot find device "nvmf_init_br2" 00:29:18.495 14:33:46 nvmf_dif -- nvmf/common.sh@167 -- # true 00:29:18.495 14:33:46 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:29:18.495 Cannot find device "nvmf_tgt_br" 00:29:18.495 14:33:46 nvmf_dif -- nvmf/common.sh@168 -- # true 00:29:18.495 14:33:46 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:29:18.495 Cannot find device "nvmf_tgt_br2" 00:29:18.495 14:33:46 nvmf_dif -- nvmf/common.sh@169 -- # true 00:29:18.495 14:33:46 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:29:18.495 Cannot find device "nvmf_br" 00:29:18.495 14:33:46 nvmf_dif -- nvmf/common.sh@170 -- # true 00:29:18.495 14:33:46 nvmf_dif -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:29:18.755 Cannot find device "nvmf_init_if" 00:29:18.755 14:33:46 nvmf_dif -- nvmf/common.sh@171 -- # true 00:29:18.755 14:33:46 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:29:18.755 Cannot find device "nvmf_init_if2" 00:29:18.755 14:33:46 nvmf_dif -- nvmf/common.sh@172 -- # true 00:29:18.755 14:33:46 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:18.755 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:18.755 14:33:46 nvmf_dif -- nvmf/common.sh@173 -- # true 00:29:18.755 14:33:46 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:18.755 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:18.755 14:33:46 nvmf_dif -- nvmf/common.sh@174 -- # true 00:29:18.755 14:33:46 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:29:18.755 14:33:46 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:18.755 14:33:46 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:29:18.755 14:33:46 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:18.755 14:33:46 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:18.755 14:33:46 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:18.755 14:33:46 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:18.755 14:33:46 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:18.755 14:33:46 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev 
nvmf_init_if2 00:29:18.755 14:33:46 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:29:18.755 14:33:46 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:29:18.755 14:33:46 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:29:18.755 14:33:46 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:29:18.755 14:33:46 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:29:18.755 14:33:46 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:29:18.755 14:33:46 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:29:18.755 14:33:46 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:29:18.755 14:33:46 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:18.755 14:33:46 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:18.755 14:33:46 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:18.755 14:33:46 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:29:18.755 14:33:46 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:29:18.755 14:33:46 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:29:18.755 14:33:46 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:29:18.755 14:33:46 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:18.755 14:33:46 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:18.755 14:33:46 nvmf_dif -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:18.755 14:33:46 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:29:19.015 14:33:46 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:29:19.015 14:33:46 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:29:19.015 14:33:46 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:19.015 14:33:46 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:29:19.015 14:33:46 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:29:19.015 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:19.015 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.109 ms 00:29:19.015 00:29:19.015 --- 10.0.0.3 ping statistics --- 00:29:19.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:19.015 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:29:19.015 14:33:46 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:29:19.015 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:29:19.015 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.071 ms 00:29:19.015 00:29:19.015 --- 10.0.0.4 ping statistics --- 00:29:19.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:19.015 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:29:19.015 14:33:46 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:19.015 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:19.015 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:29:19.015 00:29:19.015 --- 10.0.0.1 ping statistics --- 00:29:19.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:19.015 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:29:19.015 14:33:46 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:29:19.015 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:19.015 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:29:19.015 00:29:19.015 --- 10.0.0.2 ping statistics --- 00:29:19.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:19.015 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:29:19.015 14:33:46 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:19.015 14:33:46 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:29:19.015 14:33:46 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:29:19.015 14:33:46 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:19.584 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:19.584 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:29:19.584 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:29:19.584 14:33:47 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:19.584 14:33:47 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:19.584 14:33:47 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:19.584 14:33:47 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:19.584 14:33:47 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:19.584 14:33:47 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:19.584 14:33:47 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:29:19.585 14:33:47 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:29:19.585 14:33:47 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:19.585 14:33:47 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:19.585 14:33:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:19.585 14:33:47 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=90072 00:29:19.585 14:33:47 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:29:19.585 14:33:47 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 90072 00:29:19.585 14:33:47 nvmf_dif -- common/autotest_common.sh@833 -- # '[' -z 90072 ']' 00:29:19.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:19.585 14:33:47 nvmf_dif -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:19.585 14:33:47 nvmf_dif -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:19.585 14:33:47 nvmf_dif -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
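For reference, the nvmf_veth_init sequence traced above reduces to the topology recap below. This is a hedged, minimal sketch using only the interface and namespace names that appear in the trace; the real common.sh also sets up the second initiator/target pair (nvmf_init_if2/nvmf_tgt_if2) and does the pre-cleanup shown earlier.

  # initiator veth pair stays in the root namespace, target veth pair moves into the netns
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  # bridge the two root-namespace peers so initiator and target can talk
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # allow NVMe/TCP traffic in, then verify reachability like the trace does
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3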
00:29:19.585 14:33:47 nvmf_dif -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:19.585 14:33:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:19.585 [2024-11-06 14:33:47.172514] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:29:19.585 [2024-11-06 14:33:47.172640] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:19.844 [2024-11-06 14:33:47.358455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:20.103 [2024-11-06 14:33:47.502058] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:20.103 [2024-11-06 14:33:47.502238] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:20.103 [2024-11-06 14:33:47.502391] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:20.103 [2024-11-06 14:33:47.502456] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:20.103 [2024-11-06 14:33:47.502489] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:20.103 [2024-11-06 14:33:47.503628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:20.363 [2024-11-06 14:33:47.755356] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:29:20.363 14:33:47 nvmf_dif -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:20.363 14:33:47 nvmf_dif -- common/autotest_common.sh@866 -- # return 0 00:29:20.363 14:33:47 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:20.363 14:33:47 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:20.363 14:33:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:20.623 14:33:48 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:20.623 14:33:48 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:29:20.623 14:33:48 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:29:20.623 14:33:48 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.623 14:33:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:20.623 [2024-11-06 14:33:48.034346] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:20.623 14:33:48 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.623 14:33:48 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:29:20.623 14:33:48 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:29:20.623 14:33:48 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:20.623 14:33:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:20.623 ************************************ 00:29:20.623 START TEST fio_dif_1_default 00:29:20.623 ************************************ 00:29:20.623 14:33:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1127 -- # fio_dif_1 00:29:20.623 14:33:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:29:20.623 14:33:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:29:20.623 14:33:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:29:20.623 14:33:48 
nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:29:20.623 14:33:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:29:20.623 14:33:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:20.623 14:33:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.623 14:33:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:20.623 bdev_null0 00:29:20.623 14:33:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.623 14:33:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:20.623 14:33:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.623 14:33:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:20.623 14:33:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.623 14:33:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:20.623 14:33:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.623 14:33:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:20.623 14:33:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.623 14:33:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:29:20.623 14:33:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.623 14:33:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:20.623 [2024-11-06 14:33:48.098441] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:29:20.623 14:33:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.623 14:33:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:29:20.623 14:33:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:29:20.623 14:33:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:20.623 14:33:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:29:20.623 14:33:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:29:20.623 14:33:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:20.623 14:33:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:20.623 14:33:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:29:20.623 14:33:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:20.623 14:33:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:29:20.623 14:33:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:20.623 { 00:29:20.623 "params": { 00:29:20.623 "name": "Nvme$subsystem", 00:29:20.623 "trtype": "$TEST_TRANSPORT", 00:29:20.623 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:20.623 "adrfam": "ipv4", 00:29:20.623 "trsvcid": "$NVMF_PORT", 
00:29:20.623 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:20.623 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:20.623 "hdgst": ${hdgst:-false}, 00:29:20.623 "ddgst": ${ddgst:-false} 00:29:20.623 }, 00:29:20.623 "method": "bdev_nvme_attach_controller" 00:29:20.623 } 00:29:20.623 EOF 00:29:20.623 )") 00:29:20.623 14:33:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:29:20.623 14:33:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:29:20.623 14:33:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:20.623 14:33:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local sanitizers 00:29:20.623 14:33:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:20.623 14:33:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:29:20.623 14:33:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # shift 00:29:20.623 14:33:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # local asan_lib= 00:29:20.623 14:33:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:29:20.623 14:33:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:29:20.623 14:33:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:29:20.623 14:33:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:20.623 14:33:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:29:20.623 14:33:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libasan 00:29:20.623 14:33:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:29:20.623 14:33:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:29:20.623 14:33:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:20.623 "params": { 00:29:20.623 "name": "Nvme0", 00:29:20.623 "trtype": "tcp", 00:29:20.623 "traddr": "10.0.0.3", 00:29:20.623 "adrfam": "ipv4", 00:29:20.623 "trsvcid": "4420", 00:29:20.623 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:20.623 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:20.623 "hdgst": false, 00:29:20.623 "ddgst": false 00:29:20.623 }, 00:29:20.623 "method": "bdev_nvme_attach_controller" 00:29:20.623 }' 00:29:20.623 14:33:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:29:20.623 14:33:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:29:20.623 14:33:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # break 00:29:20.623 14:33:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:20.623 14:33:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:20.883 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:20.883 fio-3.35 00:29:20.883 Starting 1 thread 00:29:33.095 00:29:33.095 filename0: (groupid=0, jobs=1): err= 0: pid=90136: Wed Nov 6 14:33:59 2024 00:29:33.095 read: IOPS=10.1k, BW=39.3MiB/s (41.2MB/s)(393MiB/10001msec) 00:29:33.095 slat (nsec): min=6488, max=87460, avg=7517.56, stdev=1960.99 00:29:33.095 clat (usec): min=337, max=4413, avg=376.57, stdev=39.13 00:29:33.095 lat (usec): min=344, max=4421, avg=384.09, stdev=39.77 00:29:33.095 clat percentiles (usec): 00:29:33.095 | 1.00th=[ 351], 5.00th=[ 355], 10.00th=[ 359], 20.00th=[ 363], 00:29:33.095 | 30.00th=[ 367], 40.00th=[ 371], 50.00th=[ 375], 60.00th=[ 379], 00:29:33.095 | 70.00th=[ 383], 80.00th=[ 388], 90.00th=[ 396], 95.00th=[ 404], 00:29:33.095 | 99.00th=[ 457], 99.50th=[ 478], 99.90th=[ 627], 99.95th=[ 676], 00:29:33.095 | 99.99th=[ 2540] 00:29:33.095 bw ( KiB/s): min=37152, max=40800, per=100.00%, avg=40239.05, stdev=833.12, samples=19 00:29:33.095 iops : min= 9288, max=10200, avg=10059.74, stdev=208.27, samples=19 00:29:33.095 lat (usec) : 500=99.70%, 750=0.27%, 1000=0.02% 00:29:33.095 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01% 00:29:33.095 cpu : usr=82.46%, sys=15.76%, ctx=46, majf=0, minf=1073 00:29:33.095 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:33.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:33.095 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:33.095 issued rwts: total=100548,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:33.095 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:33.095 00:29:33.095 Run status group 0 (all jobs): 00:29:33.095 READ: bw=39.3MiB/s (41.2MB/s), 39.3MiB/s-39.3MiB/s (41.2MB/s-41.2MB/s), io=393MiB (412MB), run=10001-10001msec 00:29:33.095 ----------------------------------------------------- 00:29:33.095 Suppressions used: 00:29:33.095 count bytes template 00:29:33.095 1 8 /usr/src/fio/parse.c 00:29:33.095 1 8 libtcmalloc_minimal.so 00:29:33.095 1 904 libcrypto.so 00:29:33.096 ----------------------------------------------------- 00:29:33.096 00:29:33.096 14:34:00 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@88 -- # destroy_subsystems 0 00:29:33.096 14:34:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:29:33.096 14:34:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:29:33.096 14:34:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:33.096 14:34:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:29:33.096 14:34:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:33.096 14:34:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.096 14:34:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:33.096 14:34:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.096 14:34:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:33.096 14:34:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.096 14:34:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:33.096 ************************************ 00:29:33.096 END TEST fio_dif_1_default 00:29:33.096 ************************************ 00:29:33.096 14:34:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.096 00:29:33.096 real 0m12.583s 00:29:33.096 user 0m10.272s 00:29:33.096 sys 0m2.088s 00:29:33.096 14:34:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:33.096 14:34:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:33.096 14:34:00 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:29:33.096 14:34:00 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:29:33.096 14:34:00 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:33.096 14:34:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:33.096 ************************************ 00:29:33.096 START TEST fio_dif_1_multi_subsystems 00:29:33.096 ************************************ 00:29:33.096 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1127 -- # fio_dif_1_multi_subsystems 00:29:33.096 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:29:33.096 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:29:33.096 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:29:33.096 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:29:33.096 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:29:33.096 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:29:33.096 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:33.096 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.096 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:33.096 bdev_null0 00:29:33.096 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.096 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 
53313233-0 --allow-any-host 00:29:33.096 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.096 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:33.355 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.355 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:33.355 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.355 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:33.355 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.355 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:29:33.355 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.355 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:33.355 [2024-11-06 14:34:00.760216] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:29:33.355 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.355 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:29:33.355 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:29:33.356 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:29:33.356 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:29:33.356 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.356 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:33.356 bdev_null1 00:29:33.356 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.356 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:33.356 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.356 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:33.356 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.356 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:33.356 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.356 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:33.356 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.356 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:29:33.356 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.356 14:34:00 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:33.356 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.356 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:29:33.356 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:29:33.356 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:29:33.356 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:29:33.356 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:29:33.356 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:33.356 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:33.356 { 00:29:33.356 "params": { 00:29:33.356 "name": "Nvme$subsystem", 00:29:33.356 "trtype": "$TEST_TRANSPORT", 00:29:33.356 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:33.356 "adrfam": "ipv4", 00:29:33.356 "trsvcid": "$NVMF_PORT", 00:29:33.356 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:33.356 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:33.356 "hdgst": ${hdgst:-false}, 00:29:33.356 "ddgst": ${ddgst:-false} 00:29:33.356 }, 00:29:33.356 "method": "bdev_nvme_attach_controller" 00:29:33.356 } 00:29:33.356 EOF 00:29:33.356 )") 00:29:33.356 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:33.356 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:29:33.356 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:29:33.356 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:29:33.356 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:29:33.356 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:33.356 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:29:33.356 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:33.356 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local sanitizers 00:29:33.356 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:33.356 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # shift 00:29:33.356 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:29:33.356 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # local asan_lib= 00:29:33.356 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:29:33.356 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:29:33.356 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:29:33.356 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:33.356 14:34:00 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:29:33.356 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:33.356 { 00:29:33.356 "params": { 00:29:33.356 "name": "Nvme$subsystem", 00:29:33.356 "trtype": "$TEST_TRANSPORT", 00:29:33.356 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:33.356 "adrfam": "ipv4", 00:29:33.356 "trsvcid": "$NVMF_PORT", 00:29:33.356 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:33.356 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:33.356 "hdgst": ${hdgst:-false}, 00:29:33.356 "ddgst": ${ddgst:-false} 00:29:33.356 }, 00:29:33.356 "method": "bdev_nvme_attach_controller" 00:29:33.356 } 00:29:33.356 EOF 00:29:33.356 )") 00:29:33.356 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:29:33.356 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:33.356 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libasan 00:29:33.356 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:29:33.356 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:29:33.356 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:29:33.356 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:29:33.356 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:33.356 "params": { 00:29:33.356 "name": "Nvme0", 00:29:33.356 "trtype": "tcp", 00:29:33.356 "traddr": "10.0.0.3", 00:29:33.356 "adrfam": "ipv4", 00:29:33.356 "trsvcid": "4420", 00:29:33.356 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:33.356 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:33.356 "hdgst": false, 00:29:33.356 "ddgst": false 00:29:33.356 }, 00:29:33.356 "method": "bdev_nvme_attach_controller" 00:29:33.356 },{ 00:29:33.356 "params": { 00:29:33.356 "name": "Nvme1", 00:29:33.356 "trtype": "tcp", 00:29:33.356 "traddr": "10.0.0.3", 00:29:33.356 "adrfam": "ipv4", 00:29:33.356 "trsvcid": "4420", 00:29:33.356 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:33.356 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:33.356 "hdgst": false, 00:29:33.356 "ddgst": false 00:29:33.356 }, 00:29:33.356 "method": "bdev_nvme_attach_controller" 00:29:33.356 }' 00:29:33.356 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:29:33.356 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:29:33.356 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # break 00:29:33.356 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:33.356 14:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:33.615 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:33.615 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:33.615 fio-3.35 00:29:33.615 Starting 2 threads 00:29:45.838 00:29:45.838 filename0: (groupid=0, jobs=1): err= 0: 
pid=90301: Wed Nov 6 14:34:12 2024 00:29:45.838 read: IOPS=5336, BW=20.8MiB/s (21.9MB/s)(208MiB/10001msec) 00:29:45.838 slat (usec): min=4, max=182, avg=12.78, stdev= 3.26 00:29:45.838 clat (usec): min=389, max=4561, avg=714.89, stdev=63.04 00:29:45.838 lat (usec): min=396, max=4576, avg=727.67, stdev=63.35 00:29:45.838 clat percentiles (usec): 00:29:45.838 | 1.00th=[ 660], 5.00th=[ 676], 10.00th=[ 685], 20.00th=[ 693], 00:29:45.838 | 30.00th=[ 701], 40.00th=[ 709], 50.00th=[ 709], 60.00th=[ 717], 00:29:45.838 | 70.00th=[ 725], 80.00th=[ 734], 90.00th=[ 742], 95.00th=[ 750], 00:29:45.838 | 99.00th=[ 775], 99.50th=[ 799], 99.90th=[ 1500], 99.95th=[ 2311], 00:29:45.838 | 99.99th=[ 2802] 00:29:45.838 bw ( KiB/s): min=20288, max=21600, per=50.09%, avg=21393.95, stdev=298.60, samples=19 00:29:45.838 iops : min= 5072, max= 5400, avg=5348.47, stdev=74.67, samples=19 00:29:45.838 lat (usec) : 500=0.04%, 750=94.02%, 1000=5.69% 00:29:45.838 lat (msec) : 2=0.19%, 4=0.05%, 10=0.01% 00:29:45.838 cpu : usr=89.39%, sys=9.36%, ctx=100, majf=0, minf=1072 00:29:45.838 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:45.838 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:45.838 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:45.838 issued rwts: total=53372,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:45.838 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:45.838 filename1: (groupid=0, jobs=1): err= 0: pid=90302: Wed Nov 6 14:34:12 2024 00:29:45.838 read: IOPS=5341, BW=20.9MiB/s (21.9MB/s)(209MiB/10001msec) 00:29:45.838 slat (nsec): min=6666, max=58669, avg=12793.83, stdev=3104.86 00:29:45.838 clat (usec): min=369, max=4480, avg=714.49, stdev=66.53 00:29:45.838 lat (usec): min=376, max=4495, avg=727.28, stdev=67.40 00:29:45.838 clat percentiles (usec): 00:29:45.838 | 1.00th=[ 627], 5.00th=[ 652], 10.00th=[ 660], 20.00th=[ 685], 00:29:45.838 | 30.00th=[ 701], 40.00th=[ 709], 50.00th=[ 717], 60.00th=[ 725], 00:29:45.838 | 70.00th=[ 734], 80.00th=[ 742], 90.00th=[ 750], 95.00th=[ 766], 00:29:45.838 | 99.00th=[ 783], 99.50th=[ 799], 99.90th=[ 1500], 99.95th=[ 2311], 00:29:45.838 | 99.99th=[ 2573] 00:29:45.838 bw ( KiB/s): min=20288, max=21632, per=50.14%, avg=21416.42, stdev=292.67, samples=19 00:29:45.838 iops : min= 5072, max= 5408, avg=5354.11, stdev=73.17, samples=19 00:29:45.838 lat (usec) : 500=0.09%, 750=88.07%, 1000=11.67% 00:29:45.838 lat (msec) : 2=0.11%, 4=0.06%, 10=0.01% 00:29:45.838 cpu : usr=89.68%, sys=9.16%, ctx=10, majf=0, minf=1075 00:29:45.838 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:45.838 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:45.838 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:45.838 issued rwts: total=53416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:45.838 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:45.838 00:29:45.838 Run status group 0 (all jobs): 00:29:45.838 READ: bw=41.7MiB/s (43.7MB/s), 20.8MiB/s-20.9MiB/s (21.9MB/s-21.9MB/s), io=417MiB (437MB), run=10001-10001msec 00:29:46.098 ----------------------------------------------------- 00:29:46.098 Suppressions used: 00:29:46.098 count bytes template 00:29:46.098 2 16 /usr/src/fio/parse.c 00:29:46.098 1 8 libtcmalloc_minimal.so 00:29:46.098 1 904 libcrypto.so 00:29:46.098 ----------------------------------------------------- 00:29:46.098 00:29:46.098 14:34:13 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@96 -- # destroy_subsystems 0 1 00:29:46.098 14:34:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:29:46.098 14:34:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:29:46.098 14:34:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:46.098 14:34:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:29:46.098 14:34:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:46.098 14:34:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.098 14:34:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:46.098 14:34:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.099 14:34:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:46.099 14:34:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.099 14:34:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:46.099 14:34:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.099 14:34:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:29:46.099 14:34:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:46.099 14:34:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:29:46.099 14:34:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:46.099 14:34:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.099 14:34:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:46.099 14:34:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.099 14:34:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:46.099 14:34:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.099 14:34:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:46.099 ************************************ 00:29:46.099 END TEST fio_dif_1_multi_subsystems 00:29:46.099 ************************************ 00:29:46.099 14:34:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.099 00:29:46.099 real 0m12.871s 00:29:46.099 user 0m20.216s 00:29:46.099 sys 0m2.367s 00:29:46.099 14:34:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:46.099 14:34:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:46.099 14:34:13 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:29:46.099 14:34:13 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:29:46.099 14:34:13 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:46.099 14:34:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:46.099 ************************************ 00:29:46.099 START TEST fio_dif_rand_params 00:29:46.099 ************************************ 00:29:46.099 14:34:13 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1127 -- # fio_dif_rand_params 00:29:46.099 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:29:46.099 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:29:46.099 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:29:46.099 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:29:46.099 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:29:46.099 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:29:46.099 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:29:46.099 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:29:46.099 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:29:46.099 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:46.099 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:29:46.099 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:29:46.099 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:29:46.099 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.099 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:46.099 bdev_null0 00:29:46.099 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.099 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:46.099 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.099 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:46.099 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.099 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:46.099 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.099 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:46.099 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.099 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:29:46.099 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.099 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:46.099 [2024-11-06 14:34:13.704489] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:29:46.099 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.099 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:29:46.099 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:29:46.099 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:46.099 14:34:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:29:46.099 
14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:46.099 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:29:46.099 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:46.099 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:29:46.099 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:29:46.099 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:29:46.099 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:46.099 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:29:46.099 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:46.099 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:29:46.099 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:29:46.099 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:29:46.099 14:34:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:29:46.099 14:34:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:46.099 14:34:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:46.099 { 00:29:46.099 "params": { 00:29:46.099 "name": "Nvme$subsystem", 00:29:46.099 "trtype": "$TEST_TRANSPORT", 00:29:46.099 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:46.099 "adrfam": "ipv4", 00:29:46.099 "trsvcid": "$NVMF_PORT", 00:29:46.099 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:46.099 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:46.099 "hdgst": ${hdgst:-false}, 00:29:46.099 "ddgst": ${ddgst:-false} 00:29:46.099 }, 00:29:46.099 "method": "bdev_nvme_attach_controller" 00:29:46.099 } 00:29:46.099 EOF 00:29:46.099 )") 00:29:46.099 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:46.099 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:29:46.099 14:34:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:46.099 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:29:46.099 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:29:46.099 14:34:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:29:46.099 14:34:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
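Condensing the target-side setup that this rand_params test drives through rpc_cmd, the traced RPCs amount to the sequence below. The rpc.py path and socket flag are assumptions based on the workspace layout shown in the log; the command names and arguments are copied from the trace (transport created once at test start, null bdev here with DIF type 3 metadata).

  RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock'
  $RPC nvmf_create_transport -t tcp -o --dif-insert-or-strip
  $RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3   # 64 MB null bdev, 512-byte blocks + 16-byte metadata
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420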
00:29:46.099 14:34:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:29:46.099 14:34:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:46.099 "params": { 00:29:46.099 "name": "Nvme0", 00:29:46.099 "trtype": "tcp", 00:29:46.099 "traddr": "10.0.0.3", 00:29:46.099 "adrfam": "ipv4", 00:29:46.099 "trsvcid": "4420", 00:29:46.099 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:46.099 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:46.099 "hdgst": false, 00:29:46.099 "ddgst": false 00:29:46.099 }, 00:29:46.099 "method": "bdev_nvme_attach_controller" 00:29:46.099 }' 00:29:46.359 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:29:46.359 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:29:46.359 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # break 00:29:46.359 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:46.359 14:34:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:46.359 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:29:46.359 ... 00:29:46.359 fio-3.35 00:29:46.359 Starting 3 threads 00:29:52.961 00:29:52.961 filename0: (groupid=0, jobs=1): err= 0: pid=90473: Wed Nov 6 14:34:19 2024 00:29:52.961 read: IOPS=277, BW=34.6MiB/s (36.3MB/s)(174MiB/5011msec) 00:29:52.961 slat (nsec): min=7011, max=73002, avg=23596.02, stdev=15325.75 00:29:52.961 clat (usec): min=7100, max=24743, avg=10762.47, stdev=782.07 00:29:52.961 lat (usec): min=7108, max=24760, avg=10786.07, stdev=781.67 00:29:52.961 clat percentiles (usec): 00:29:52.961 | 1.00th=[10552], 5.00th=[10552], 10.00th=[10552], 20.00th=[10552], 00:29:52.961 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10683], 60.00th=[10683], 00:29:52.961 | 70.00th=[10683], 80.00th=[10814], 90.00th=[10945], 95.00th=[11207], 00:29:52.961 | 99.00th=[12649], 99.50th=[14615], 99.90th=[24773], 99.95th=[24773], 00:29:52.961 | 99.99th=[24773] 00:29:52.961 bw ( KiB/s): min=33792, max=36096, per=33.38%, avg=35481.60, stdev=705.74, samples=10 00:29:52.961 iops : min= 264, max= 282, avg=277.20, stdev= 5.51, samples=10 00:29:52.961 lat (msec) : 10=0.22%, 20=99.57%, 50=0.22% 00:29:52.961 cpu : usr=94.31%, sys=5.19%, ctx=11, majf=0, minf=1075 00:29:52.961 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:52.961 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:52.961 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:52.961 issued rwts: total=1389,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:52.961 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:52.961 filename0: (groupid=0, jobs=1): err= 0: pid=90474: Wed Nov 6 14:34:19 2024 00:29:52.961 read: IOPS=277, BW=34.6MiB/s (36.3MB/s)(173MiB/5003msec) 00:29:52.961 slat (nsec): min=6582, max=93222, avg=17173.13, stdev=10315.29 00:29:52.961 clat (usec): min=9240, max=22753, avg=10785.56, stdev=751.55 00:29:52.961 lat (usec): min=9249, max=22770, avg=10802.73, stdev=751.85 00:29:52.961 clat percentiles (usec): 00:29:52.961 | 1.00th=[10552], 5.00th=[10552], 10.00th=[10552], 20.00th=[10683], 00:29:52.961 | 30.00th=[10683], 40.00th=[10683], 
50.00th=[10683], 60.00th=[10683], 00:29:52.961 | 70.00th=[10683], 80.00th=[10814], 90.00th=[10945], 95.00th=[11076], 00:29:52.961 | 99.00th=[12649], 99.50th=[14484], 99.90th=[22676], 99.95th=[22676], 00:29:52.961 | 99.99th=[22676] 00:29:52.961 bw ( KiB/s): min=33090, max=36096, per=33.32%, avg=35420.67, stdev=954.64, samples=9 00:29:52.961 iops : min= 258, max= 282, avg=276.67, stdev= 7.62, samples=9 00:29:52.961 lat (msec) : 10=0.22%, 20=99.57%, 50=0.22% 00:29:52.961 cpu : usr=92.02%, sys=7.46%, ctx=13, majf=0, minf=1075 00:29:52.961 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:52.961 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:52.961 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:52.961 issued rwts: total=1386,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:52.961 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:52.961 filename0: (groupid=0, jobs=1): err= 0: pid=90475: Wed Nov 6 14:34:19 2024 00:29:52.961 read: IOPS=276, BW=34.6MiB/s (36.3MB/s)(173MiB/5004msec) 00:29:52.961 slat (nsec): min=7049, max=85330, avg=23387.40, stdev=14369.37 00:29:52.961 clat (usec): min=10453, max=20128, avg=10774.01, stdev=694.41 00:29:52.961 lat (usec): min=10469, max=20203, avg=10797.40, stdev=694.52 00:29:52.961 clat percentiles (usec): 00:29:52.961 | 1.00th=[10552], 5.00th=[10552], 10.00th=[10552], 20.00th=[10552], 00:29:52.961 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10683], 60.00th=[10683], 00:29:52.961 | 70.00th=[10683], 80.00th=[10814], 90.00th=[10945], 95.00th=[11207], 00:29:52.961 | 99.00th=[12649], 99.50th=[14615], 99.90th=[20055], 99.95th=[20055], 00:29:52.961 | 99.99th=[20055] 00:29:52.962 bw ( KiB/s): min=33024, max=36096, per=33.32%, avg=35413.33, stdev=974.82, samples=9 00:29:52.962 iops : min= 258, max= 282, avg=276.67, stdev= 7.62, samples=9 00:29:52.962 lat (msec) : 20=99.78%, 50=0.22% 00:29:52.962 cpu : usr=93.98%, sys=5.44%, ctx=51, majf=0, minf=1073 00:29:52.962 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:52.962 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:52.962 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:52.962 issued rwts: total=1386,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:52.962 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:52.962 00:29:52.962 Run status group 0 (all jobs): 00:29:52.962 READ: bw=104MiB/s (109MB/s), 34.6MiB/s-34.6MiB/s (36.3MB/s-36.3MB/s), io=520MiB (545MB), run=5003-5011msec 00:29:53.900 ----------------------------------------------------- 00:29:53.900 Suppressions used: 00:29:53.900 count bytes template 00:29:53.900 5 44 /usr/src/fio/parse.c 00:29:53.900 1 8 libtcmalloc_minimal.so 00:29:53.900 1 904 libcrypto.so 00:29:53.900 ----------------------------------------------------- 00:29:53.900 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:53.900 bdev_null0 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:53.900 [2024-11-06 14:34:21.298979] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:29:53.900 
14:34:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:53.900 bdev_null1 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:53.900 bdev_null2 00:29:53.900 14:34:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.901 14:34:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:29:53.901 14:34:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.901 14:34:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:53.901 14:34:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.901 14:34:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:29:53.901 14:34:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.901 14:34:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:53.901 14:34:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.901 14:34:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:29:53.901 14:34:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.901 14:34:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:53.901 14:34:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.901 14:34:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:29:53.901 14:34:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:29:53.901 14:34:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:29:53.901 14:34:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:29:53.901 14:34:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:29:53.901 14:34:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:53.901 14:34:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:53.901 { 00:29:53.901 "params": { 00:29:53.901 "name": "Nvme$subsystem", 00:29:53.901 "trtype": "$TEST_TRANSPORT", 00:29:53.901 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:53.901 "adrfam": "ipv4", 00:29:53.901 "trsvcid": "$NVMF_PORT", 00:29:53.901 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:53.901 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:53.901 "hdgst": ${hdgst:-false}, 00:29:53.901 "ddgst": ${ddgst:-false} 00:29:53.901 }, 00:29:53.901 "method": "bdev_nvme_attach_controller" 00:29:53.901 } 00:29:53.901 EOF 00:29:53.901 )") 00:29:53.901 14:34:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:53.901 14:34:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:29:53.901 14:34:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:29:53.901 14:34:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:29:53.901 14:34:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:29:53.901 14:34:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:53.901 14:34:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:29:53.901 14:34:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:53.901 14:34:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:29:53.901 14:34:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:53.901 14:34:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:29:53.901 14:34:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:29:53.901 14:34:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= 
files )) 00:29:53.901 14:34:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:29:53.901 14:34:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:29:53.901 14:34:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:29:53.901 14:34:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:53.901 14:34:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:53.901 { 00:29:53.901 "params": { 00:29:53.901 "name": "Nvme$subsystem", 00:29:53.901 "trtype": "$TEST_TRANSPORT", 00:29:53.901 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:53.901 "adrfam": "ipv4", 00:29:53.901 "trsvcid": "$NVMF_PORT", 00:29:53.901 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:53.901 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:53.901 "hdgst": ${hdgst:-false}, 00:29:53.901 "ddgst": ${ddgst:-false} 00:29:53.901 }, 00:29:53.901 "method": "bdev_nvme_attach_controller" 00:29:53.901 } 00:29:53.901 EOF 00:29:53.901 )") 00:29:53.901 14:34:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:29:53.901 14:34:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:29:53.901 14:34:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:53.901 14:34:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:29:53.901 14:34:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:29:53.901 14:34:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:53.901 14:34:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:29:53.901 14:34:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:53.901 14:34:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:53.901 { 00:29:53.901 "params": { 00:29:53.901 "name": "Nvme$subsystem", 00:29:53.901 "trtype": "$TEST_TRANSPORT", 00:29:53.901 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:53.901 "adrfam": "ipv4", 00:29:53.901 "trsvcid": "$NVMF_PORT", 00:29:53.901 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:53.901 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:53.901 "hdgst": ${hdgst:-false}, 00:29:53.901 "ddgst": ${ddgst:-false} 00:29:53.901 }, 00:29:53.901 "method": "bdev_nvme_attach_controller" 00:29:53.901 } 00:29:53.901 EOF 00:29:53.901 )") 00:29:53.901 14:34:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:29:53.901 14:34:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:29:53.901 14:34:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:53.901 14:34:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
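Note: before this second fio pass, the rpc_cmd calls traced above created three DIF type-2 null bdevs and exported each one through its own NVMe-oF/TCP subsystem listening on 10.0.0.3:4420. A condensed sketch of the equivalent setup driven directly through SPDK's scripts/rpc.py (the harness goes through its rpc_cmd wrapper; the rpc.py path is an assumption, the arguments are taken verbatim from the trace, and the TCP transport is assumed to have been created earlier in the run):

    # Sketch under the assumptions noted above; not the harness code itself.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for i in 0 1 2; do
        # 64 MiB null bdev, 512 B blocks + 16 B metadata, protection info DIF type 2
        "$rpc" bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 2
        "$rpc" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
            --serial-number "53313233-$i" --allow-any-host
        "$rpc" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
        "$rpc" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.3 -s 4420
    done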
00:29:53.901 14:34:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:29:53.901 14:34:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:53.901 "params": { 00:29:53.901 "name": "Nvme0", 00:29:53.901 "trtype": "tcp", 00:29:53.901 "traddr": "10.0.0.3", 00:29:53.901 "adrfam": "ipv4", 00:29:53.901 "trsvcid": "4420", 00:29:53.901 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:53.901 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:53.901 "hdgst": false, 00:29:53.901 "ddgst": false 00:29:53.901 }, 00:29:53.901 "method": "bdev_nvme_attach_controller" 00:29:53.901 },{ 00:29:53.901 "params": { 00:29:53.901 "name": "Nvme1", 00:29:53.901 "trtype": "tcp", 00:29:53.901 "traddr": "10.0.0.3", 00:29:53.901 "adrfam": "ipv4", 00:29:53.901 "trsvcid": "4420", 00:29:53.901 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:53.901 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:53.901 "hdgst": false, 00:29:53.901 "ddgst": false 00:29:53.901 }, 00:29:53.901 "method": "bdev_nvme_attach_controller" 00:29:53.901 },{ 00:29:53.901 "params": { 00:29:53.901 "name": "Nvme2", 00:29:53.901 "trtype": "tcp", 00:29:53.901 "traddr": "10.0.0.3", 00:29:53.901 "adrfam": "ipv4", 00:29:53.901 "trsvcid": "4420", 00:29:53.901 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:53.901 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:53.901 "hdgst": false, 00:29:53.901 "ddgst": false 00:29:53.901 }, 00:29:53.901 "method": "bdev_nvme_attach_controller" 00:29:53.901 }' 00:29:53.901 14:34:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:29:53.901 14:34:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:29:53.901 14:34:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # break 00:29:53.901 14:34:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:53.901 14:34:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:54.161 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:54.161 ... 00:29:54.161 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:54.161 ... 00:29:54.161 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:54.161 ... 
00:29:54.161 fio-3.35 00:29:54.161 Starting 24 threads 00:30:06.408 00:30:06.408 filename0: (groupid=0, jobs=1): err= 0: pid=90574: Wed Nov 6 14:34:33 2024 00:30:06.408 read: IOPS=236, BW=945KiB/s (968kB/s)(9488KiB/10037msec) 00:30:06.408 slat (usec): min=3, max=9045, avg=45.85, stdev=412.05 00:30:06.408 clat (msec): min=23, max=145, avg=67.45, stdev=20.13 00:30:06.408 lat (msec): min=23, max=145, avg=67.49, stdev=20.14 00:30:06.408 clat percentiles (msec): 00:30:06.408 | 1.00th=[ 32], 5.00th=[ 41], 10.00th=[ 45], 20.00th=[ 48], 00:30:06.408 | 30.00th=[ 57], 40.00th=[ 63], 50.00th=[ 67], 60.00th=[ 71], 00:30:06.408 | 70.00th=[ 72], 80.00th=[ 81], 90.00th=[ 96], 95.00th=[ 108], 00:30:06.408 | 99.00th=[ 129], 99.50th=[ 142], 99.90th=[ 142], 99.95th=[ 146], 00:30:06.408 | 99.99th=[ 146] 00:30:06.408 bw ( KiB/s): min= 528, max= 1165, per=4.07%, avg=943.45, stdev=169.53, samples=20 00:30:06.408 iops : min= 132, max= 291, avg=235.85, stdev=42.36, samples=20 00:30:06.408 lat (msec) : 50=22.47%, 100=70.15%, 250=7.38% 00:30:06.408 cpu : usr=39.09%, sys=1.73%, ctx=1277, majf=0, minf=1073 00:30:06.408 IO depths : 1=0.1%, 2=0.9%, 4=3.7%, 8=79.5%, 16=15.9%, 32=0.0%, >=64=0.0% 00:30:06.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.408 complete : 0=0.0%, 4=88.2%, 8=11.0%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.408 issued rwts: total=2372,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:06.408 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:06.408 filename0: (groupid=0, jobs=1): err= 0: pid=90575: Wed Nov 6 14:34:33 2024 00:30:06.408 read: IOPS=245, BW=980KiB/s (1004kB/s)(9848KiB/10048msec) 00:30:06.408 slat (usec): min=4, max=8041, avg=23.36, stdev=181.13 00:30:06.408 clat (msec): min=5, max=142, avg=65.08, stdev=22.48 00:30:06.408 lat (msec): min=5, max=142, avg=65.10, stdev=22.48 00:30:06.408 clat percentiles (msec): 00:30:06.408 | 1.00th=[ 6], 5.00th=[ 16], 10.00th=[ 40], 20.00th=[ 48], 00:30:06.408 | 30.00th=[ 58], 40.00th=[ 63], 50.00th=[ 68], 60.00th=[ 72], 00:30:06.408 | 70.00th=[ 72], 80.00th=[ 82], 90.00th=[ 96], 95.00th=[ 106], 00:30:06.408 | 99.00th=[ 111], 99.50th=[ 117], 99.90th=[ 124], 99.95th=[ 132], 00:30:06.408 | 99.99th=[ 142] 00:30:06.408 bw ( KiB/s): min= 712, max= 2048, per=4.23%, avg=980.80, stdev=278.06, samples=20 00:30:06.408 iops : min= 178, max= 512, avg=245.20, stdev=69.51, samples=20 00:30:06.408 lat (msec) : 10=3.09%, 20=2.76%, 50=19.82%, 100=68.12%, 250=6.21% 00:30:06.408 cpu : usr=37.31%, sys=1.74%, ctx=1291, majf=0, minf=1075 00:30:06.408 IO depths : 1=0.2%, 2=0.9%, 4=3.0%, 8=79.6%, 16=16.2%, 32=0.0%, >=64=0.0% 00:30:06.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.408 complete : 0=0.0%, 4=88.4%, 8=10.9%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.408 issued rwts: total=2462,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:06.408 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:06.408 filename0: (groupid=0, jobs=1): err= 0: pid=90576: Wed Nov 6 14:34:33 2024 00:30:06.408 read: IOPS=240, BW=963KiB/s (986kB/s)(9648KiB/10016msec) 00:30:06.408 slat (usec): min=3, max=9040, avg=35.57, stdev=298.76 00:30:06.408 clat (msec): min=20, max=149, avg=66.27, stdev=20.74 00:30:06.408 lat (msec): min=20, max=149, avg=66.31, stdev=20.73 00:30:06.408 clat percentiles (msec): 00:30:06.408 | 1.00th=[ 33], 5.00th=[ 39], 10.00th=[ 45], 20.00th=[ 48], 00:30:06.408 | 30.00th=[ 50], 40.00th=[ 61], 50.00th=[ 64], 60.00th=[ 71], 00:30:06.408 | 70.00th=[ 72], 80.00th=[ 83], 90.00th=[ 97], 
95.00th=[ 107], 00:30:06.408 | 99.00th=[ 121], 99.50th=[ 127], 99.90th=[ 133], 99.95th=[ 150], 00:30:06.408 | 99.99th=[ 150] 00:30:06.408 bw ( KiB/s): min= 640, max= 1160, per=4.09%, avg=948.16, stdev=168.47, samples=19 00:30:06.408 iops : min= 160, max= 290, avg=237.00, stdev=42.16, samples=19 00:30:06.408 lat (msec) : 50=30.72%, 100=60.41%, 250=8.87% 00:30:06.408 cpu : usr=35.88%, sys=1.80%, ctx=1030, majf=0, minf=1074 00:30:06.408 IO depths : 1=0.1%, 2=1.1%, 4=4.1%, 8=79.5%, 16=15.2%, 32=0.0%, >=64=0.0% 00:30:06.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.408 complete : 0=0.0%, 4=87.9%, 8=11.2%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.408 issued rwts: total=2412,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:06.408 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:06.408 filename0: (groupid=0, jobs=1): err= 0: pid=90577: Wed Nov 6 14:34:33 2024 00:30:06.408 read: IOPS=240, BW=964KiB/s (987kB/s)(9684KiB/10048msec) 00:30:06.408 slat (usec): min=4, max=9030, avg=36.22, stdev=373.59 00:30:06.408 clat (msec): min=14, max=144, avg=66.24, stdev=19.43 00:30:06.408 lat (msec): min=14, max=144, avg=66.27, stdev=19.45 00:30:06.408 clat percentiles (msec): 00:30:06.408 | 1.00th=[ 23], 5.00th=[ 36], 10.00th=[ 46], 20.00th=[ 48], 00:30:06.408 | 30.00th=[ 60], 40.00th=[ 61], 50.00th=[ 69], 60.00th=[ 72], 00:30:06.408 | 70.00th=[ 72], 80.00th=[ 82], 90.00th=[ 96], 95.00th=[ 106], 00:30:06.408 | 99.00th=[ 109], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 132], 00:30:06.408 | 99.99th=[ 144] 00:30:06.408 bw ( KiB/s): min= 680, max= 1436, per=4.14%, avg=959.95, stdev=160.44, samples=20 00:30:06.408 iops : min= 170, max= 359, avg=239.95, stdev=40.10, samples=20 00:30:06.408 lat (msec) : 20=0.66%, 50=24.29%, 100=69.35%, 250=5.70% 00:30:06.408 cpu : usr=32.06%, sys=1.33%, ctx=910, majf=0, minf=1072 00:30:06.408 IO depths : 1=0.1%, 2=0.2%, 4=0.9%, 8=82.2%, 16=16.7%, 32=0.0%, >=64=0.0% 00:30:06.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.408 complete : 0=0.0%, 4=87.8%, 8=12.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.408 issued rwts: total=2421,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:06.408 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:06.408 filename0: (groupid=0, jobs=1): err= 0: pid=90578: Wed Nov 6 14:34:33 2024 00:30:06.408 read: IOPS=230, BW=924KiB/s (946kB/s)(9264KiB/10031msec) 00:30:06.408 slat (usec): min=3, max=8040, avg=27.96, stdev=254.67 00:30:06.408 clat (msec): min=18, max=143, avg=69.15, stdev=20.09 00:30:06.408 lat (msec): min=18, max=143, avg=69.18, stdev=20.08 00:30:06.408 clat percentiles (msec): 00:30:06.408 | 1.00th=[ 23], 5.00th=[ 37], 10.00th=[ 47], 20.00th=[ 49], 00:30:06.408 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 71], 60.00th=[ 72], 00:30:06.408 | 70.00th=[ 75], 80.00th=[ 85], 90.00th=[ 97], 95.00th=[ 107], 00:30:06.408 | 99.00th=[ 122], 99.50th=[ 122], 99.90th=[ 125], 99.95th=[ 144], 00:30:06.408 | 99.99th=[ 144] 00:30:06.408 bw ( KiB/s): min= 637, max= 1264, per=3.98%, avg=921.45, stdev=171.05, samples=20 00:30:06.408 iops : min= 159, max= 316, avg=230.35, stdev=42.78, samples=20 00:30:06.408 lat (msec) : 20=0.60%, 50=20.08%, 100=72.80%, 250=6.52% 00:30:06.408 cpu : usr=41.58%, sys=1.65%, ctx=1243, majf=0, minf=1073 00:30:06.408 IO depths : 1=0.1%, 2=1.9%, 4=7.4%, 8=75.2%, 16=15.5%, 32=0.0%, >=64=0.0% 00:30:06.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.408 complete : 0=0.0%, 4=89.5%, 8=8.9%, 16=1.6%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:30:06.408 issued rwts: total=2316,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:06.408 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:06.408 filename0: (groupid=0, jobs=1): err= 0: pid=90579: Wed Nov 6 14:34:33 2024 00:30:06.408 read: IOPS=260, BW=1041KiB/s (1066kB/s)(10.2MiB/10041msec) 00:30:06.408 slat (usec): min=3, max=5022, avg=24.39, stdev=163.68 00:30:06.408 clat (usec): min=1906, max=124432, avg=61347.78, stdev=22946.76 00:30:06.408 lat (usec): min=1916, max=124440, avg=61372.18, stdev=22944.20 00:30:06.408 clat percentiles (msec): 00:30:06.408 | 1.00th=[ 5], 5.00th=[ 14], 10.00th=[ 38], 20.00th=[ 46], 00:30:06.408 | 30.00th=[ 50], 40.00th=[ 60], 50.00th=[ 64], 60.00th=[ 69], 00:30:06.408 | 70.00th=[ 72], 80.00th=[ 75], 90.00th=[ 92], 95.00th=[ 105], 00:30:06.408 | 99.00th=[ 111], 99.50th=[ 118], 99.90th=[ 125], 99.95th=[ 125], 00:30:06.408 | 99.99th=[ 125] 00:30:06.408 bw ( KiB/s): min= 664, max= 2576, per=4.48%, avg=1038.00, stdev=377.92, samples=20 00:30:06.408 iops : min= 166, max= 644, avg=259.50, stdev=94.48, samples=20 00:30:06.408 lat (msec) : 2=0.08%, 4=0.61%, 10=3.41%, 20=3.18%, 50=24.12% 00:30:06.408 lat (msec) : 100=63.21%, 250=5.40% 00:30:06.408 cpu : usr=42.46%, sys=1.64%, ctx=1341, majf=0, minf=1075 00:30:06.408 IO depths : 1=0.2%, 2=0.5%, 4=1.6%, 8=81.7%, 16=16.0%, 32=0.0%, >=64=0.0% 00:30:06.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.408 complete : 0=0.0%, 4=87.6%, 8=12.0%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.408 issued rwts: total=2612,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:06.408 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:06.408 filename0: (groupid=0, jobs=1): err= 0: pid=90580: Wed Nov 6 14:34:33 2024 00:30:06.408 read: IOPS=272, BW=1088KiB/s (1114kB/s)(10.7MiB/10051msec) 00:30:06.408 slat (usec): min=4, max=3121, avg=17.45, stdev=69.15 00:30:06.408 clat (usec): min=1035, max=139241, avg=58634.38, stdev=28592.56 00:30:06.408 lat (usec): min=1043, max=139249, avg=58651.83, stdev=28594.72 00:30:06.408 clat percentiles (usec): 00:30:06.408 | 1.00th=[ 1450], 5.00th=[ 1549], 10.00th=[ 4752], 20.00th=[ 42206], 00:30:06.408 | 30.00th=[ 49021], 40.00th=[ 58983], 50.00th=[ 64226], 60.00th=[ 68682], 00:30:06.408 | 70.00th=[ 71828], 80.00th=[ 77071], 90.00th=[ 94897], 95.00th=[102237], 00:30:06.408 | 99.00th=[113771], 99.50th=[114820], 99.90th=[123208], 99.95th=[135267], 00:30:06.408 | 99.99th=[139461] 00:30:06.408 bw ( KiB/s): min= 648, max= 4224, per=4.70%, avg=1089.60, stdev=748.74, samples=20 00:30:06.408 iops : min= 162, max= 1056, avg=272.35, stdev=187.21, samples=20 00:30:06.408 lat (msec) : 2=5.93%, 4=2.34%, 10=4.54%, 20=2.34%, 50=16.28% 00:30:06.408 lat (msec) : 100=62.55%, 250=6.04% 00:30:06.408 cpu : usr=42.74%, sys=1.93%, ctx=1582, majf=0, minf=1075 00:30:06.408 IO depths : 1=0.7%, 2=2.3%, 4=6.7%, 8=75.3%, 16=15.1%, 32=0.0%, >=64=0.0% 00:30:06.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.408 complete : 0=0.0%, 4=89.3%, 8=9.2%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.408 issued rwts: total=2734,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:06.408 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:06.408 filename0: (groupid=0, jobs=1): err= 0: pid=90581: Wed Nov 6 14:34:33 2024 00:30:06.409 read: IOPS=247, BW=991KiB/s (1014kB/s)(9928KiB/10021msec) 00:30:06.409 slat (usec): min=3, max=8054, avg=33.74, stdev=321.92 00:30:06.409 clat (msec): min=11, max=120, avg=64.44, stdev=18.87 00:30:06.409 lat (msec): 
min=11, max=120, avg=64.47, stdev=18.86 00:30:06.409 clat percentiles (msec): 00:30:06.409 | 1.00th=[ 24], 5.00th=[ 37], 10.00th=[ 43], 20.00th=[ 48], 00:30:06.409 | 30.00th=[ 50], 40.00th=[ 61], 50.00th=[ 63], 60.00th=[ 71], 00:30:06.409 | 70.00th=[ 72], 80.00th=[ 74], 90.00th=[ 96], 95.00th=[ 99], 00:30:06.409 | 99.00th=[ 109], 99.50th=[ 121], 99.90th=[ 121], 99.95th=[ 121], 00:30:06.409 | 99.99th=[ 121] 00:30:06.409 bw ( KiB/s): min= 688, max= 1288, per=4.22%, avg=976.42, stdev=137.31, samples=19 00:30:06.409 iops : min= 172, max= 322, avg=244.11, stdev=34.33, samples=19 00:30:06.409 lat (msec) : 20=0.32%, 50=30.50%, 100=65.03%, 250=4.15% 00:30:06.409 cpu : usr=32.03%, sys=1.20%, ctx=903, majf=0, minf=1071 00:30:06.409 IO depths : 1=0.1%, 2=0.1%, 4=0.4%, 8=83.2%, 16=16.2%, 32=0.0%, >=64=0.0% 00:30:06.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.409 complete : 0=0.0%, 4=87.2%, 8=12.7%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.409 issued rwts: total=2482,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:06.409 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:06.409 filename1: (groupid=0, jobs=1): err= 0: pid=90582: Wed Nov 6 14:34:33 2024 00:30:06.409 read: IOPS=238, BW=956KiB/s (979kB/s)(9616KiB/10060msec) 00:30:06.409 slat (usec): min=3, max=8030, avg=34.67, stdev=365.27 00:30:06.409 clat (msec): min=4, max=155, avg=66.73, stdev=23.95 00:30:06.409 lat (msec): min=4, max=155, avg=66.76, stdev=23.95 00:30:06.409 clat percentiles (msec): 00:30:06.409 | 1.00th=[ 6], 5.00th=[ 16], 10.00th=[ 39], 20.00th=[ 50], 00:30:06.409 | 30.00th=[ 61], 40.00th=[ 63], 50.00th=[ 71], 60.00th=[ 72], 00:30:06.409 | 70.00th=[ 72], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 108], 00:30:06.409 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 157], 99.95th=[ 157], 00:30:06.409 | 99.99th=[ 157] 00:30:06.409 bw ( KiB/s): min= 592, max= 2160, per=4.12%, avg=955.10, stdev=310.45, samples=20 00:30:06.409 iops : min= 148, max= 540, avg=238.75, stdev=77.63, samples=20 00:30:06.409 lat (msec) : 10=3.83%, 20=3.24%, 50=13.23%, 100=73.42%, 250=6.28% 00:30:06.409 cpu : usr=32.15%, sys=1.30%, ctx=929, majf=0, minf=1072 00:30:06.409 IO depths : 1=0.1%, 2=1.5%, 4=5.6%, 8=76.7%, 16=16.1%, 32=0.0%, >=64=0.0% 00:30:06.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.409 complete : 0=0.0%, 4=89.4%, 8=9.4%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.409 issued rwts: total=2404,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:06.409 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:06.409 filename1: (groupid=0, jobs=1): err= 0: pid=90583: Wed Nov 6 14:34:33 2024 00:30:06.409 read: IOPS=241, BW=966KiB/s (989kB/s)(9680KiB/10020msec) 00:30:06.409 slat (usec): min=3, max=12031, avg=44.96, stdev=407.63 00:30:06.409 clat (msec): min=22, max=145, avg=66.02, stdev=18.79 00:30:06.409 lat (msec): min=22, max=145, avg=66.07, stdev=18.78 00:30:06.409 clat percentiles (msec): 00:30:06.409 | 1.00th=[ 32], 5.00th=[ 41], 10.00th=[ 44], 20.00th=[ 48], 00:30:06.409 | 30.00th=[ 54], 40.00th=[ 62], 50.00th=[ 66], 60.00th=[ 71], 00:30:06.409 | 70.00th=[ 72], 80.00th=[ 80], 90.00th=[ 95], 95.00th=[ 104], 00:30:06.409 | 99.00th=[ 115], 99.50th=[ 116], 99.90th=[ 124], 99.95th=[ 146], 00:30:06.409 | 99.99th=[ 146] 00:30:06.409 bw ( KiB/s): min= 720, max= 1128, per=4.10%, avg=950.21, stdev=141.05, samples=19 00:30:06.409 iops : min= 180, max= 282, avg=237.53, stdev=35.29, samples=19 00:30:06.409 lat (msec) : 50=26.94%, 100=67.36%, 250=5.70% 00:30:06.409 cpu : 
usr=42.52%, sys=1.74%, ctx=1186, majf=0, minf=1074 00:30:06.409 IO depths : 1=0.1%, 2=0.8%, 4=2.8%, 8=80.8%, 16=15.5%, 32=0.0%, >=64=0.0% 00:30:06.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.409 complete : 0=0.0%, 4=87.6%, 8=11.8%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.409 issued rwts: total=2420,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:06.409 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:06.409 filename1: (groupid=0, jobs=1): err= 0: pid=90584: Wed Nov 6 14:34:33 2024 00:30:06.409 read: IOPS=239, BW=960KiB/s (983kB/s)(9620KiB/10022msec) 00:30:06.409 slat (usec): min=3, max=8040, avg=34.59, stdev=280.93 00:30:06.409 clat (msec): min=23, max=120, avg=66.49, stdev=17.77 00:30:06.409 lat (msec): min=23, max=120, avg=66.53, stdev=17.77 00:30:06.409 clat percentiles (msec): 00:30:06.409 | 1.00th=[ 36], 5.00th=[ 42], 10.00th=[ 45], 20.00th=[ 48], 00:30:06.409 | 30.00th=[ 56], 40.00th=[ 63], 50.00th=[ 67], 60.00th=[ 70], 00:30:06.409 | 70.00th=[ 72], 80.00th=[ 79], 90.00th=[ 95], 95.00th=[ 103], 00:30:06.409 | 99.00th=[ 113], 99.50th=[ 116], 99.90th=[ 122], 99.95th=[ 122], 00:30:06.409 | 99.99th=[ 122] 00:30:06.409 bw ( KiB/s): min= 712, max= 1072, per=4.10%, avg=949.53, stdev=107.67, samples=19 00:30:06.409 iops : min= 178, max= 268, avg=237.37, stdev=26.92, samples=19 00:30:06.409 lat (msec) : 50=22.95%, 100=71.35%, 250=5.70% 00:30:06.409 cpu : usr=41.61%, sys=2.00%, ctx=1399, majf=0, minf=1072 00:30:06.409 IO depths : 1=0.1%, 2=0.5%, 4=2.0%, 8=81.5%, 16=16.0%, 32=0.0%, >=64=0.0% 00:30:06.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.409 complete : 0=0.0%, 4=87.7%, 8=11.9%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.409 issued rwts: total=2405,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:06.409 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:06.409 filename1: (groupid=0, jobs=1): err= 0: pid=90585: Wed Nov 6 14:34:33 2024 00:30:06.409 read: IOPS=237, BW=949KiB/s (972kB/s)(9512KiB/10023msec) 00:30:06.409 slat (usec): min=3, max=8279, avg=34.02, stdev=257.24 00:30:06.409 clat (msec): min=23, max=142, avg=67.28, stdev=20.57 00:30:06.409 lat (msec): min=23, max=142, avg=67.31, stdev=20.58 00:30:06.409 clat percentiles (msec): 00:30:06.409 | 1.00th=[ 33], 5.00th=[ 41], 10.00th=[ 44], 20.00th=[ 48], 00:30:06.409 | 30.00th=[ 53], 40.00th=[ 62], 50.00th=[ 68], 60.00th=[ 71], 00:30:06.409 | 70.00th=[ 73], 80.00th=[ 82], 90.00th=[ 97], 95.00th=[ 106], 00:30:06.409 | 99.00th=[ 121], 99.50th=[ 133], 99.90th=[ 142], 99.95th=[ 142], 00:30:06.409 | 99.99th=[ 142] 00:30:06.409 bw ( KiB/s): min= 624, max= 1128, per=4.04%, avg=935.47, stdev=166.19, samples=19 00:30:06.409 iops : min= 156, max= 282, avg=233.84, stdev=41.57, samples=19 00:30:06.409 lat (msec) : 50=28.09%, 100=64.68%, 250=7.23% 00:30:06.409 cpu : usr=42.43%, sys=1.90%, ctx=1119, majf=0, minf=1074 00:30:06.409 IO depths : 1=0.1%, 2=1.3%, 4=5.3%, 8=78.0%, 16=15.2%, 32=0.0%, >=64=0.0% 00:30:06.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.409 complete : 0=0.0%, 4=88.4%, 8=10.4%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.409 issued rwts: total=2378,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:06.409 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:06.409 filename1: (groupid=0, jobs=1): err= 0: pid=90586: Wed Nov 6 14:34:33 2024 00:30:06.409 read: IOPS=245, BW=984KiB/s (1007kB/s)(9860KiB/10025msec) 00:30:06.409 slat (usec): min=3, max=8023, avg=24.31, stdev=161.83 
00:30:06.409 clat (msec): min=20, max=129, avg=64.89, stdev=18.23 00:30:06.409 lat (msec): min=20, max=129, avg=64.92, stdev=18.23 00:30:06.409 clat percentiles (msec): 00:30:06.409 | 1.00th=[ 33], 5.00th=[ 40], 10.00th=[ 44], 20.00th=[ 48], 00:30:06.409 | 30.00th=[ 54], 40.00th=[ 60], 50.00th=[ 65], 60.00th=[ 69], 00:30:06.409 | 70.00th=[ 72], 80.00th=[ 78], 90.00th=[ 95], 95.00th=[ 102], 00:30:06.409 | 99.00th=[ 109], 99.50th=[ 115], 99.90th=[ 130], 99.95th=[ 130], 00:30:06.409 | 99.99th=[ 130] 00:30:06.409 bw ( KiB/s): min= 736, max= 1128, per=4.19%, avg=970.53, stdev=130.56, samples=19 00:30:06.409 iops : min= 184, max= 282, avg=242.63, stdev=32.64, samples=19 00:30:06.409 lat (msec) : 50=27.10%, 100=67.79%, 250=5.11% 00:30:06.409 cpu : usr=40.63%, sys=1.61%, ctx=1215, majf=0, minf=1074 00:30:06.409 IO depths : 1=0.1%, 2=0.5%, 4=1.9%, 8=81.8%, 16=15.8%, 32=0.0%, >=64=0.0% 00:30:06.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.409 complete : 0=0.0%, 4=87.5%, 8=12.1%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.409 issued rwts: total=2465,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:06.409 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:06.409 filename1: (groupid=0, jobs=1): err= 0: pid=90587: Wed Nov 6 14:34:33 2024 00:30:06.409 read: IOPS=232, BW=929KiB/s (952kB/s)(9312KiB/10020msec) 00:30:06.409 slat (usec): min=3, max=8029, avg=27.39, stdev=204.15 00:30:06.409 clat (msec): min=19, max=135, avg=68.72, stdev=19.33 00:30:06.409 lat (msec): min=19, max=135, avg=68.75, stdev=19.32 00:30:06.409 clat percentiles (msec): 00:30:06.409 | 1.00th=[ 36], 5.00th=[ 41], 10.00th=[ 48], 20.00th=[ 49], 00:30:06.409 | 30.00th=[ 61], 40.00th=[ 63], 50.00th=[ 70], 60.00th=[ 72], 00:30:06.409 | 70.00th=[ 72], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 108], 00:30:06.409 | 99.00th=[ 121], 99.50th=[ 136], 99.90th=[ 136], 99.95th=[ 136], 00:30:06.409 | 99.99th=[ 136] 00:30:06.409 bw ( KiB/s): min= 528, max= 1080, per=3.95%, avg=915.79, stdev=141.57, samples=19 00:30:06.409 iops : min= 132, max= 270, avg=228.95, stdev=35.39, samples=19 00:30:06.409 lat (msec) : 20=0.26%, 50=21.43%, 100=71.48%, 250=6.83% 00:30:06.409 cpu : usr=32.01%, sys=1.23%, ctx=913, majf=0, minf=1073 00:30:06.409 IO depths : 1=0.1%, 2=0.8%, 4=3.2%, 8=79.8%, 16=16.2%, 32=0.0%, >=64=0.0% 00:30:06.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.409 complete : 0=0.0%, 4=88.3%, 8=11.0%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.409 issued rwts: total=2328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:06.409 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:06.409 filename1: (groupid=0, jobs=1): err= 0: pid=90588: Wed Nov 6 14:34:33 2024 00:30:06.409 read: IOPS=240, BW=963KiB/s (986kB/s)(9660KiB/10032msec) 00:30:06.409 slat (usec): min=3, max=8029, avg=23.64, stdev=230.75 00:30:06.409 clat (msec): min=21, max=122, avg=66.29, stdev=18.38 00:30:06.409 lat (msec): min=21, max=122, avg=66.31, stdev=18.38 00:30:06.409 clat percentiles (msec): 00:30:06.409 | 1.00th=[ 23], 5.00th=[ 40], 10.00th=[ 45], 20.00th=[ 48], 00:30:06.409 | 30.00th=[ 59], 40.00th=[ 63], 50.00th=[ 68], 60.00th=[ 71], 00:30:06.409 | 70.00th=[ 72], 80.00th=[ 79], 90.00th=[ 94], 95.00th=[ 104], 00:30:06.409 | 99.00th=[ 111], 99.50th=[ 117], 99.90th=[ 122], 99.95th=[ 122], 00:30:06.409 | 99.99th=[ 123] 00:30:06.409 bw ( KiB/s): min= 741, max= 1320, per=4.15%, avg=962.25, stdev=134.85, samples=20 00:30:06.409 iops : min= 185, max= 330, avg=240.55, stdev=33.73, samples=20 
00:30:06.409 lat (msec) : 50=22.77%, 100=71.80%, 250=5.42% 00:30:06.409 cpu : usr=36.90%, sys=1.60%, ctx=1127, majf=0, minf=1075 00:30:06.409 IO depths : 1=0.1%, 2=0.4%, 4=1.3%, 8=81.8%, 16=16.4%, 32=0.0%, >=64=0.0% 00:30:06.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.410 complete : 0=0.0%, 4=87.8%, 8=11.9%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.410 issued rwts: total=2415,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:06.410 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:06.410 filename1: (groupid=0, jobs=1): err= 0: pid=90589: Wed Nov 6 14:34:33 2024 00:30:06.410 read: IOPS=242, BW=970KiB/s (993kB/s)(9716KiB/10015msec) 00:30:06.410 slat (usec): min=3, max=9051, avg=44.75, stdev=404.47 00:30:06.410 clat (msec): min=18, max=144, avg=65.77, stdev=20.09 00:30:06.410 lat (msec): min=18, max=144, avg=65.82, stdev=20.07 00:30:06.410 clat percentiles (msec): 00:30:06.410 | 1.00th=[ 30], 5.00th=[ 41], 10.00th=[ 44], 20.00th=[ 48], 00:30:06.410 | 30.00th=[ 52], 40.00th=[ 61], 50.00th=[ 65], 60.00th=[ 70], 00:30:06.410 | 70.00th=[ 72], 80.00th=[ 80], 90.00th=[ 96], 95.00th=[ 107], 00:30:06.410 | 99.00th=[ 121], 99.50th=[ 132], 99.90th=[ 132], 99.95th=[ 144], 00:30:06.410 | 99.99th=[ 144] 00:30:06.410 bw ( KiB/s): min= 624, max= 1144, per=4.12%, avg=954.11, stdev=158.16, samples=19 00:30:06.410 iops : min= 156, max= 286, avg=238.53, stdev=39.54, samples=19 00:30:06.410 lat (msec) : 20=0.16%, 50=28.00%, 100=65.42%, 250=6.42% 00:30:06.410 cpu : usr=37.37%, sys=1.64%, ctx=1231, majf=0, minf=1072 00:30:06.410 IO depths : 1=0.1%, 2=1.0%, 4=3.9%, 8=79.7%, 16=15.3%, 32=0.0%, >=64=0.0% 00:30:06.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.410 complete : 0=0.0%, 4=87.9%, 8=11.3%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.410 issued rwts: total=2429,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:06.410 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:06.410 filename2: (groupid=0, jobs=1): err= 0: pid=90590: Wed Nov 6 14:34:33 2024 00:30:06.410 read: IOPS=241, BW=966KiB/s (989kB/s)(9676KiB/10019msec) 00:30:06.410 slat (usec): min=3, max=8064, avg=30.35, stdev=283.01 00:30:06.410 clat (msec): min=23, max=144, avg=66.10, stdev=19.71 00:30:06.410 lat (msec): min=23, max=144, avg=66.13, stdev=19.71 00:30:06.410 clat percentiles (msec): 00:30:06.410 | 1.00th=[ 36], 5.00th=[ 39], 10.00th=[ 46], 20.00th=[ 48], 00:30:06.410 | 30.00th=[ 51], 40.00th=[ 61], 50.00th=[ 63], 60.00th=[ 72], 00:30:06.410 | 70.00th=[ 72], 80.00th=[ 82], 90.00th=[ 96], 95.00th=[ 107], 00:30:06.410 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 133], 99.95th=[ 144], 00:30:06.410 | 99.99th=[ 144] 00:30:06.410 bw ( KiB/s): min= 640, max= 1096, per=4.11%, avg=952.95, stdev=154.20, samples=19 00:30:06.410 iops : min= 160, max= 274, avg=238.21, stdev=38.54, samples=19 00:30:06.410 lat (msec) : 50=29.35%, 100=64.24%, 250=6.41% 00:30:06.410 cpu : usr=31.69%, sys=1.42%, ctx=888, majf=0, minf=1073 00:30:06.410 IO depths : 1=0.1%, 2=1.1%, 4=4.4%, 8=79.0%, 16=15.4%, 32=0.0%, >=64=0.0% 00:30:06.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.410 complete : 0=0.0%, 4=88.1%, 8=10.9%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.410 issued rwts: total=2419,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:06.410 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:06.410 filename2: (groupid=0, jobs=1): err= 0: pid=90591: Wed Nov 6 14:34:33 2024 00:30:06.410 read: IOPS=259, BW=1037KiB/s 
(1062kB/s)(10.1MiB/10011msec) 00:30:06.410 slat (usec): min=3, max=7143, avg=30.43, stdev=215.80 00:30:06.410 clat (usec): min=713, max=132015, avg=61565.77, stdev=21708.21 00:30:06.410 lat (usec): min=733, max=132030, avg=61596.20, stdev=21700.65 00:30:06.410 clat percentiles (usec): 00:30:06.410 | 1.00th=[ 1713], 5.00th=[ 32113], 10.00th=[ 40109], 20.00th=[ 45351], 00:30:06.410 | 30.00th=[ 47973], 40.00th=[ 55837], 50.00th=[ 63177], 60.00th=[ 67634], 00:30:06.410 | 70.00th=[ 70779], 80.00th=[ 74974], 90.00th=[ 94897], 95.00th=[ 99091], 00:30:06.410 | 99.00th=[110625], 99.50th=[114820], 99.90th=[120062], 99.95th=[120062], 00:30:06.410 | 99.99th=[131597] 00:30:06.410 bw ( KiB/s): min= 688, max= 1184, per=4.25%, avg=983.58, stdev=145.78, samples=19 00:30:06.410 iops : min= 172, max= 296, avg=245.89, stdev=36.44, samples=19 00:30:06.410 lat (usec) : 750=0.04% 00:30:06.410 lat (msec) : 2=1.54%, 4=1.12%, 10=0.77%, 20=0.35%, 50=30.52% 00:30:06.410 lat (msec) : 100=61.16%, 250=4.51% 00:30:06.410 cpu : usr=40.78%, sys=1.73%, ctx=1350, majf=0, minf=1074 00:30:06.410 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.5%, 16=15.9%, 32=0.0%, >=64=0.0% 00:30:06.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.410 complete : 0=0.0%, 4=87.0%, 8=12.9%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.410 issued rwts: total=2595,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:06.410 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:06.410 filename2: (groupid=0, jobs=1): err= 0: pid=90592: Wed Nov 6 14:34:33 2024 00:30:06.410 read: IOPS=232, BW=929KiB/s (951kB/s)(9316KiB/10031msec) 00:30:06.410 slat (usec): min=3, max=8040, avg=26.56, stdev=235.21 00:30:06.410 clat (msec): min=8, max=144, avg=68.72, stdev=21.71 00:30:06.410 lat (msec): min=9, max=144, avg=68.74, stdev=21.71 00:30:06.410 clat percentiles (msec): 00:30:06.410 | 1.00th=[ 20], 5.00th=[ 36], 10.00th=[ 45], 20.00th=[ 48], 00:30:06.410 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 71], 60.00th=[ 72], 00:30:06.410 | 70.00th=[ 74], 80.00th=[ 85], 90.00th=[ 97], 95.00th=[ 107], 00:30:06.410 | 99.00th=[ 130], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:30:06.410 | 99.99th=[ 144] 00:30:06.410 bw ( KiB/s): min= 525, max= 1448, per=4.00%, avg=927.45, stdev=201.31, samples=20 00:30:06.410 iops : min= 131, max= 362, avg=231.85, stdev=50.35, samples=20 00:30:06.410 lat (msec) : 10=0.34%, 20=0.99%, 50=21.08%, 100=69.99%, 250=7.60% 00:30:06.410 cpu : usr=37.34%, sys=1.61%, ctx=1252, majf=0, minf=1074 00:30:06.410 IO depths : 1=0.1%, 2=1.7%, 4=7.0%, 8=75.7%, 16=15.6%, 32=0.0%, >=64=0.0% 00:30:06.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.410 complete : 0=0.0%, 4=89.3%, 8=9.1%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.410 issued rwts: total=2329,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:06.410 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:06.410 filename2: (groupid=0, jobs=1): err= 0: pid=90593: Wed Nov 6 14:34:33 2024 00:30:06.410 read: IOPS=240, BW=961KiB/s (984kB/s)(9640KiB/10036msec) 00:30:06.410 slat (usec): min=4, max=8051, avg=43.46, stdev=417.69 00:30:06.410 clat (msec): min=24, max=132, avg=66.41, stdev=17.85 00:30:06.410 lat (msec): min=24, max=132, avg=66.45, stdev=17.85 00:30:06.410 clat percentiles (msec): 00:30:06.410 | 1.00th=[ 36], 5.00th=[ 41], 10.00th=[ 46], 20.00th=[ 48], 00:30:06.410 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 67], 60.00th=[ 71], 00:30:06.410 | 70.00th=[ 72], 80.00th=[ 81], 90.00th=[ 94], 95.00th=[ 103], 00:30:06.410 | 
99.00th=[ 109], 99.50th=[ 112], 99.90th=[ 121], 99.95th=[ 131], 00:30:06.410 | 99.99th=[ 133] 00:30:06.410 bw ( KiB/s): min= 712, max= 1144, per=4.14%, avg=959.80, stdev=120.05, samples=20 00:30:06.410 iops : min= 178, max= 286, avg=239.90, stdev=30.08, samples=20 00:30:06.410 lat (msec) : 50=23.40%, 100=70.50%, 250=6.10% 00:30:06.410 cpu : usr=36.51%, sys=1.37%, ctx=1013, majf=0, minf=1074 00:30:06.410 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=82.9%, 16=16.4%, 32=0.0%, >=64=0.0% 00:30:06.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.410 complete : 0=0.0%, 4=87.4%, 8=12.5%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.410 issued rwts: total=2410,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:06.410 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:06.410 filename2: (groupid=0, jobs=1): err= 0: pid=90594: Wed Nov 6 14:34:33 2024 00:30:06.410 read: IOPS=237, BW=949KiB/s (971kB/s)(9520KiB/10036msec) 00:30:06.410 slat (usec): min=3, max=8054, avg=32.42, stdev=297.40 00:30:06.410 clat (msec): min=32, max=156, avg=67.28, stdev=19.81 00:30:06.410 lat (msec): min=32, max=156, avg=67.31, stdev=19.81 00:30:06.410 clat percentiles (msec): 00:30:06.410 | 1.00th=[ 36], 5.00th=[ 41], 10.00th=[ 47], 20.00th=[ 48], 00:30:06.410 | 30.00th=[ 59], 40.00th=[ 61], 50.00th=[ 69], 60.00th=[ 72], 00:30:06.410 | 70.00th=[ 72], 80.00th=[ 75], 90.00th=[ 96], 95.00th=[ 108], 00:30:06.410 | 99.00th=[ 132], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 157], 00:30:06.410 | 99.99th=[ 157] 00:30:06.410 bw ( KiB/s): min= 528, max= 1168, per=4.09%, avg=947.40, stdev=153.12, samples=20 00:30:06.410 iops : min= 132, max= 292, avg=236.80, stdev=38.31, samples=20 00:30:06.410 lat (msec) : 50=26.81%, 100=66.76%, 250=6.43% 00:30:06.410 cpu : usr=32.20%, sys=1.06%, ctx=921, majf=0, minf=1074 00:30:06.410 IO depths : 1=0.1%, 2=1.0%, 4=3.5%, 8=79.7%, 16=15.7%, 32=0.0%, >=64=0.0% 00:30:06.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.410 complete : 0=0.0%, 4=88.1%, 8=11.1%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.410 issued rwts: total=2380,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:06.410 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:06.410 filename2: (groupid=0, jobs=1): err= 0: pid=90595: Wed Nov 6 14:34:33 2024 00:30:06.410 read: IOPS=221, BW=885KiB/s (906kB/s)(8880KiB/10036msec) 00:30:06.410 slat (usec): min=3, max=8055, avg=35.06, stdev=360.55 00:30:06.410 clat (msec): min=18, max=154, avg=72.06, stdev=21.95 00:30:06.410 lat (msec): min=18, max=154, avg=72.10, stdev=21.97 00:30:06.410 clat percentiles (msec): 00:30:06.410 | 1.00th=[ 22], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 59], 00:30:06.410 | 30.00th=[ 62], 40.00th=[ 66], 50.00th=[ 72], 60.00th=[ 72], 00:30:06.410 | 70.00th=[ 81], 80.00th=[ 88], 90.00th=[ 104], 95.00th=[ 111], 00:30:06.410 | 99.00th=[ 134], 99.50th=[ 140], 99.90th=[ 155], 99.95th=[ 155], 00:30:06.410 | 99.99th=[ 155] 00:30:06.410 bw ( KiB/s): min= 512, max= 1282, per=3.81%, avg=882.05, stdev=181.08, samples=20 00:30:06.410 iops : min= 128, max= 320, avg=220.40, stdev=45.23, samples=20 00:30:06.410 lat (msec) : 20=0.63%, 50=14.95%, 100=72.88%, 250=11.53% 00:30:06.410 cpu : usr=36.16%, sys=1.44%, ctx=1033, majf=0, minf=1073 00:30:06.410 IO depths : 1=0.1%, 2=3.1%, 4=12.2%, 8=69.8%, 16=14.9%, 32=0.0%, >=64=0.0% 00:30:06.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.410 complete : 0=0.0%, 4=90.9%, 8=6.4%, 16=2.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.410 issued rwts: 
total=2220,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:06.410 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:06.410 filename2: (groupid=0, jobs=1): err= 0: pid=90596: Wed Nov 6 14:34:33 2024 00:30:06.410 read: IOPS=238, BW=952KiB/s (975kB/s)(9552KiB/10033msec) 00:30:06.410 slat (usec): min=3, max=9031, avg=38.67, stdev=393.60 00:30:06.410 clat (msec): min=21, max=132, avg=67.00, stdev=20.31 00:30:06.410 lat (msec): min=21, max=132, avg=67.04, stdev=20.32 00:30:06.410 clat percentiles (msec): 00:30:06.410 | 1.00th=[ 22], 5.00th=[ 37], 10.00th=[ 46], 20.00th=[ 48], 00:30:06.410 | 30.00th=[ 58], 40.00th=[ 62], 50.00th=[ 68], 60.00th=[ 72], 00:30:06.410 | 70.00th=[ 72], 80.00th=[ 81], 90.00th=[ 97], 95.00th=[ 107], 00:30:06.410 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 132], 99.95th=[ 133], 00:30:06.410 | 99.99th=[ 133] 00:30:06.410 bw ( KiB/s): min= 616, max= 1392, per=4.10%, avg=950.90, stdev=185.07, samples=20 00:30:06.411 iops : min= 154, max= 348, avg=237.70, stdev=46.31, samples=20 00:30:06.411 lat (msec) : 50=24.79%, 100=67.59%, 250=7.62% 00:30:06.411 cpu : usr=37.97%, sys=1.43%, ctx=1081, majf=0, minf=1072 00:30:06.411 IO depths : 1=0.1%, 2=1.3%, 4=5.2%, 8=77.7%, 16=15.7%, 32=0.0%, >=64=0.0% 00:30:06.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.411 complete : 0=0.0%, 4=88.7%, 8=10.1%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.411 issued rwts: total=2388,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:06.411 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:06.411 filename2: (groupid=0, jobs=1): err= 0: pid=90597: Wed Nov 6 14:34:33 2024 00:30:06.411 read: IOPS=243, BW=974KiB/s (997kB/s)(9740KiB/10003msec) 00:30:06.411 slat (usec): min=3, max=8064, avg=31.90, stdev=270.58 00:30:06.411 clat (msec): min=5, max=144, avg=65.58, stdev=20.46 00:30:06.411 lat (msec): min=5, max=144, avg=65.61, stdev=20.46 00:30:06.411 clat percentiles (msec): 00:30:06.411 | 1.00th=[ 21], 5.00th=[ 40], 10.00th=[ 43], 20.00th=[ 48], 00:30:06.411 | 30.00th=[ 51], 40.00th=[ 61], 50.00th=[ 65], 60.00th=[ 71], 00:30:06.411 | 70.00th=[ 72], 80.00th=[ 81], 90.00th=[ 96], 95.00th=[ 103], 00:30:06.411 | 99.00th=[ 118], 99.50th=[ 133], 99.90th=[ 144], 99.95th=[ 144], 00:30:06.411 | 99.99th=[ 144] 00:30:06.411 bw ( KiB/s): min= 640, max= 1160, per=4.08%, avg=945.68, stdev=154.73, samples=19 00:30:06.411 iops : min= 160, max= 290, avg=236.42, stdev=38.68, samples=19 00:30:06.411 lat (msec) : 10=0.94%, 20=0.12%, 50=28.79%, 100=64.60%, 250=5.54% 00:30:06.411 cpu : usr=37.02%, sys=1.54%, ctx=1061, majf=0, minf=1072 00:30:06.411 IO depths : 1=0.1%, 2=1.2%, 4=4.7%, 8=78.7%, 16=15.3%, 32=0.0%, >=64=0.0% 00:30:06.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.411 complete : 0=0.0%, 4=88.3%, 8=10.7%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.411 issued rwts: total=2435,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:06.411 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:06.411 00:30:06.411 Run status group 0 (all jobs): 00:30:06.411 READ: bw=22.6MiB/s (23.7MB/s), 885KiB/s-1088KiB/s (906kB/s-1114kB/s), io=227MiB (239MB), run=10003-10060msec 00:30:06.978 ----------------------------------------------------- 00:30:06.978 Suppressions used: 00:30:06.978 count bytes template 00:30:06.978 45 402 /usr/src/fio/parse.c 00:30:06.978 1 8 libtcmalloc_minimal.so 00:30:06.978 1 904 libcrypto.so 00:30:06.978 ----------------------------------------------------- 00:30:06.978 00:30:06.978 14:34:34 nvmf_dif.fio_dif_rand_params 
-- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:30:06.978 14:34:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:06.978 14:34:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:06.978 14:34:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:06.978 14:34:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:06.978 14:34:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:06.978 14:34:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.978 14:34:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:06.978 14:34:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.978 14:34:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:06.978 14:34:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.978 14:34:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:06.978 14:34:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.978 14:34:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:06.978 14:34:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:06.979 14:34:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:30:06.979 14:34:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:06.979 14:34:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.979 14:34:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:06.979 14:34:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.979 14:34:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:06.979 14:34:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.979 14:34:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:06.979 14:34:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.979 14:34:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:06.979 14:34:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:30:06.979 14:34:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:30:06.979 14:34:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:06.979 14:34:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.979 14:34:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:06.979 14:34:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.979 14:34:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:30:06.979 14:34:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.979 14:34:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:07.238 14:34:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.238 14:34:34 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:30:07.238 14:34:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:30:07.238 14:34:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:30:07.238 14:34:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:07.239 bdev_null0 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:07.239 [2024-11-06 14:34:34.658706] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:30:07.239 bdev_null1 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- 
# for sanitizer in "${sanitizers[@]}" 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:07.239 { 00:30:07.239 "params": { 00:30:07.239 "name": "Nvme$subsystem", 00:30:07.239 "trtype": "$TEST_TRANSPORT", 00:30:07.239 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:07.239 "adrfam": "ipv4", 00:30:07.239 "trsvcid": "$NVMF_PORT", 00:30:07.239 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:07.239 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:07.239 "hdgst": ${hdgst:-false}, 00:30:07.239 "ddgst": ${ddgst:-false} 00:30:07.239 }, 00:30:07.239 "method": "bdev_nvme_attach_controller" 00:30:07.239 } 00:30:07.239 EOF 00:30:07.239 )") 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:07.239 { 00:30:07.239 "params": { 00:30:07.239 "name": "Nvme$subsystem", 00:30:07.239 "trtype": "$TEST_TRANSPORT", 00:30:07.239 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:07.239 "adrfam": "ipv4", 00:30:07.239 "trsvcid": "$NVMF_PORT", 00:30:07.239 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:07.239 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:07.239 "hdgst": ${hdgst:-false}, 00:30:07.239 "ddgst": ${ddgst:-false} 00:30:07.239 }, 00:30:07.239 "method": "bdev_nvme_attach_controller" 00:30:07.239 } 00:30:07.239 EOF 00:30:07.239 )") 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
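[editor note] The two heredoc blocks in the trace above come from gen_nvmf_target_json: each subsystem id passed in yields one bdev_nvme_attach_controller stanza, the stanzas are comma-joined, and jq checks the result before it is handed to fio over /dev/fd/62. A minimal sketch of that pattern with the placeholders resolved the way this run resolves them (TCP target at 10.0.0.3:4420); the real helper goes on to embed the joined stanzas in the complete JSON config fio receives, while this sketch only collects them into a bare array so jq has valid input:

    # one attach-controller stanza per subsystem id, values as used in this run
    config=()
    for subsystem in 0 1; do
      config+=('{
      "params": {
        "name": "Nvme'"$subsystem"'", "trtype": "tcp", "traddr": "10.0.0.3",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode'"$subsystem"'",
        "hostnqn": "nqn.2016-06.io.spdk:host'"$subsystem"'",
        "hdgst": false, "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }')
    done
    # comma-join the stanzas (IFS scoped to a subshell) and pretty-print/validate with jq
    ( IFS=,; printf '[ %s ]\n' "${config[*]}" ) | jq .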
00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:07.239 "params": { 00:30:07.239 "name": "Nvme0", 00:30:07.239 "trtype": "tcp", 00:30:07.239 "traddr": "10.0.0.3", 00:30:07.239 "adrfam": "ipv4", 00:30:07.239 "trsvcid": "4420", 00:30:07.239 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:07.239 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:07.239 "hdgst": false, 00:30:07.239 "ddgst": false 00:30:07.239 }, 00:30:07.239 "method": "bdev_nvme_attach_controller" 00:30:07.239 },{ 00:30:07.239 "params": { 00:30:07.239 "name": "Nvme1", 00:30:07.239 "trtype": "tcp", 00:30:07.239 "traddr": "10.0.0.3", 00:30:07.239 "adrfam": "ipv4", 00:30:07.239 "trsvcid": "4420", 00:30:07.239 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:07.239 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:07.239 "hdgst": false, 00:30:07.239 "ddgst": false 00:30:07.239 }, 00:30:07.239 "method": "bdev_nvme_attach_controller" 00:30:07.239 }' 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # break 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:07.239 14:34:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:07.499 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:07.499 ... 00:30:07.499 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:07.499 ... 
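[editor note] Outside the harness, the run assembled above can be approximated by saving the generated JSON to a file and pointing fio's spdk_bdev engine at it. A rough, standalone sketch of the command (paths match this workspace; --name/--filename values, the ./nvmf_target.json path, and the Nvme0n1 bdev name are assumptions for illustration, as the harness instead passes the job file and JSON on /dev/fd/61 and /dev/fd/62):

    # job parameters mirror dif.sh@115 above: randread, bs=8k, iodepth=8, numjobs=2, runtime=5
    LD_PRELOAD="/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev" \
    /usr/src/fio/fio --name=filename0 --filename=Nvme0n1 \
        --ioengine=spdk_bdev --spdk_json_conf=./nvmf_target.json \
        --thread --rw=randread --bs=8k --iodepth=8 --numjobs=2 \
        --time_based --runtime=5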
00:30:07.499 fio-3.35 00:30:07.499 Starting 4 threads 00:30:14.069 00:30:14.069 filename0: (groupid=0, jobs=1): err= 0: pid=90748: Wed Nov 6 14:34:40 2024 00:30:14.069 read: IOPS=2154, BW=16.8MiB/s (17.7MB/s)(84.2MiB/5002msec) 00:30:14.069 slat (nsec): min=6573, max=92910, avg=23611.55, stdev=14123.58 00:30:14.069 clat (usec): min=448, max=7981, avg=3618.67, stdev=676.05 00:30:14.069 lat (usec): min=474, max=8020, avg=3642.28, stdev=675.97 00:30:14.069 clat percentiles (usec): 00:30:14.069 | 1.00th=[ 1827], 5.00th=[ 2180], 10.00th=[ 2606], 20.00th=[ 3261], 00:30:14.069 | 30.00th=[ 3458], 40.00th=[ 3523], 50.00th=[ 3621], 60.00th=[ 3916], 00:30:14.069 | 70.00th=[ 4047], 80.00th=[ 4146], 90.00th=[ 4293], 95.00th=[ 4424], 00:30:14.069 | 99.00th=[ 4883], 99.50th=[ 5145], 99.90th=[ 6521], 99.95th=[ 7963], 00:30:14.069 | 99.99th=[ 7963] 00:30:14.069 bw ( KiB/s): min=16016, max=19152, per=23.91%, avg=17238.40, stdev=992.03, samples=10 00:30:14.069 iops : min= 2002, max= 2394, avg=2154.80, stdev=124.00, samples=10 00:30:14.069 lat (usec) : 500=0.01% 00:30:14.069 lat (msec) : 2=3.53%, 4=62.15%, 10=34.32% 00:30:14.069 cpu : usr=94.18%, sys=5.02%, ctx=86, majf=0, minf=1074 00:30:14.069 IO depths : 1=3.5%, 2=16.8%, 4=54.7%, 8=25.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:14.069 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:14.069 complete : 0=0.0%, 4=93.2%, 8=6.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:14.069 issued rwts: total=10779,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:14.069 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:14.069 filename0: (groupid=0, jobs=1): err= 0: pid=90749: Wed Nov 6 14:34:40 2024 00:30:14.069 read: IOPS=2363, BW=18.5MiB/s (19.4MB/s)(92.4MiB/5001msec) 00:30:14.069 slat (nsec): min=6237, max=82885, avg=24097.10, stdev=15370.24 00:30:14.069 clat (usec): min=618, max=7036, avg=3299.12, stdev=782.24 00:30:14.069 lat (usec): min=628, max=7053, avg=3323.21, stdev=782.35 00:30:14.069 clat percentiles (usec): 00:30:14.069 | 1.00th=[ 1565], 5.00th=[ 1893], 10.00th=[ 1975], 20.00th=[ 2442], 00:30:14.069 | 30.00th=[ 3130], 40.00th=[ 3359], 50.00th=[ 3458], 60.00th=[ 3556], 00:30:14.069 | 70.00th=[ 3720], 80.00th=[ 3949], 90.00th=[ 4146], 95.00th=[ 4293], 00:30:14.069 | 99.00th=[ 4752], 99.50th=[ 5014], 99.90th=[ 6194], 99.95th=[ 6325], 00:30:14.069 | 99.99th=[ 6652] 00:30:14.069 bw ( KiB/s): min=17280, max=20432, per=26.04%, avg=18778.67, stdev=1343.67, samples=9 00:30:14.069 iops : min= 2160, max= 2554, avg=2347.33, stdev=167.96, samples=9 00:30:14.069 lat (usec) : 750=0.01%, 1000=0.06% 00:30:14.069 lat (msec) : 2=11.13%, 4=70.88%, 10=17.92% 00:30:14.069 cpu : usr=94.52%, sys=4.62%, ctx=8, majf=0, minf=1074 00:30:14.069 IO depths : 1=3.2%, 2=9.2%, 4=59.2%, 8=28.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:14.069 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:14.069 complete : 0=0.0%, 4=96.0%, 8=4.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:14.069 issued rwts: total=11821,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:14.069 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:14.069 filename1: (groupid=0, jobs=1): err= 0: pid=90750: Wed Nov 6 14:34:40 2024 00:30:14.069 read: IOPS=2143, BW=16.7MiB/s (17.6MB/s)(83.8MiB/5001msec) 00:30:14.069 slat (usec): min=6, max=195, avg=21.91, stdev=13.67 00:30:14.069 clat (usec): min=691, max=6606, avg=3649.84, stdev=693.34 00:30:14.069 lat (usec): min=703, max=6633, avg=3671.76, stdev=692.52 00:30:14.069 clat percentiles (usec): 00:30:14.069 | 1.00th=[ 1811], 
5.00th=[ 2180], 10.00th=[ 2606], 20.00th=[ 3294], 00:30:14.069 | 30.00th=[ 3523], 40.00th=[ 3589], 50.00th=[ 3687], 60.00th=[ 3949], 00:30:14.069 | 70.00th=[ 4047], 80.00th=[ 4146], 90.00th=[ 4359], 95.00th=[ 4490], 00:30:14.069 | 99.00th=[ 5080], 99.50th=[ 5276], 99.90th=[ 6128], 99.95th=[ 6390], 00:30:14.069 | 99.99th=[ 6587] 00:30:14.069 bw ( KiB/s): min=15344, max=20032, per=24.02%, avg=17317.33, stdev=1408.75, samples=9 00:30:14.069 iops : min= 1918, max= 2504, avg=2164.67, stdev=176.09, samples=9 00:30:14.069 lat (usec) : 750=0.01%, 1000=0.07% 00:30:14.069 lat (msec) : 2=3.49%, 4=60.26%, 10=36.17% 00:30:14.069 cpu : usr=93.98%, sys=4.92%, ctx=49, majf=0, minf=1072 00:30:14.069 IO depths : 1=3.2%, 2=17.2%, 4=54.4%, 8=25.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:14.069 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:14.069 complete : 0=0.0%, 4=93.0%, 8=7.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:14.069 issued rwts: total=10721,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:14.069 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:14.069 filename1: (groupid=0, jobs=1): err= 0: pid=90751: Wed Nov 6 14:34:40 2024 00:30:14.069 read: IOPS=2351, BW=18.4MiB/s (19.3MB/s)(91.9MiB/5002msec) 00:30:14.069 slat (nsec): min=4721, max=81083, avg=23013.07, stdev=15217.40 00:30:14.069 clat (usec): min=445, max=8066, avg=3319.15, stdev=806.79 00:30:14.069 lat (usec): min=457, max=8093, avg=3342.16, stdev=806.70 00:30:14.069 clat percentiles (usec): 00:30:14.069 | 1.00th=[ 1532], 5.00th=[ 1893], 10.00th=[ 1975], 20.00th=[ 2442], 00:30:14.069 | 30.00th=[ 3163], 40.00th=[ 3392], 50.00th=[ 3458], 60.00th=[ 3556], 00:30:14.069 | 70.00th=[ 3752], 80.00th=[ 3982], 90.00th=[ 4178], 95.00th=[ 4424], 00:30:14.069 | 99.00th=[ 4817], 99.50th=[ 5211], 99.90th=[ 5866], 99.95th=[ 6521], 00:30:14.069 | 99.99th=[ 6587] 00:30:14.069 bw ( KiB/s): min=17024, max=20832, per=26.09%, avg=18809.80, stdev=1360.87, samples=10 00:30:14.069 iops : min= 2128, max= 2604, avg=2351.20, stdev=170.14, samples=10 00:30:14.069 lat (usec) : 500=0.02%, 1000=0.04% 00:30:14.069 lat (msec) : 2=11.34%, 4=69.60%, 10=19.00% 00:30:14.069 cpu : usr=94.68%, sys=4.46%, ctx=10, majf=0, minf=1074 00:30:14.070 IO depths : 1=3.0%, 2=10.1%, 4=58.7%, 8=28.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:14.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:14.070 complete : 0=0.0%, 4=95.8%, 8=4.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:14.070 issued rwts: total=11762,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:14.070 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:14.070 00:30:14.070 Run status group 0 (all jobs): 00:30:14.070 READ: bw=70.4MiB/s (73.8MB/s), 16.7MiB/s-18.5MiB/s (17.6MB/s-19.4MB/s), io=352MiB (369MB), run=5001-5002msec 00:30:15.007 ----------------------------------------------------- 00:30:15.007 Suppressions used: 00:30:15.007 count bytes template 00:30:15.007 6 52 /usr/src/fio/parse.c 00:30:15.007 1 8 libtcmalloc_minimal.so 00:30:15.007 1 904 libcrypto.so 00:30:15.007 ----------------------------------------------------- 00:30:15.007 00:30:15.007 14:34:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:30:15.007 14:34:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:15.007 14:34:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:15.007 14:34:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:15.007 14:34:42 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@36 -- # local sub_id=0 00:30:15.007 14:34:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:15.007 14:34:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.007 14:34:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:15.007 14:34:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.007 14:34:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:15.007 14:34:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.007 14:34:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:15.007 14:34:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.007 14:34:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:15.007 14:34:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:15.007 14:34:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:30:15.007 14:34:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:15.007 14:34:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.007 14:34:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:15.007 14:34:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.007 14:34:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:15.007 14:34:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.007 14:34:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:15.007 ************************************ 00:30:15.007 END TEST fio_dif_rand_params 00:30:15.007 ************************************ 00:30:15.007 14:34:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.007 00:30:15.007 real 0m28.744s 00:30:15.007 user 2m10.538s 00:30:15.007 sys 0m7.221s 00:30:15.007 14:34:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:15.007 14:34:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:15.007 14:34:42 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:30:15.007 14:34:42 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:30:15.007 14:34:42 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:15.007 14:34:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:15.008 ************************************ 00:30:15.008 START TEST fio_dif_digest 00:30:15.008 ************************************ 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1127 -- # fio_dif_digest 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- 
target/dif.sh@127 -- # numjobs=3 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:15.008 bdev_null0 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:15.008 [2024-11-06 14:34:42.525529] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:15.008 { 00:30:15.008 "params": { 00:30:15.008 "name": "Nvme$subsystem", 00:30:15.008 "trtype": "$TEST_TRANSPORT", 00:30:15.008 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:15.008 
"adrfam": "ipv4", 00:30:15.008 "trsvcid": "$NVMF_PORT", 00:30:15.008 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:15.008 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:15.008 "hdgst": ${hdgst:-false}, 00:30:15.008 "ddgst": ${ddgst:-false} 00:30:15.008 }, 00:30:15.008 "method": "bdev_nvme_attach_controller" 00:30:15.008 } 00:30:15.008 EOF 00:30:15.008 )") 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local sanitizers 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # shift 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # local asan_lib= 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libasan 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:15.008 "params": { 00:30:15.008 "name": "Nvme0", 00:30:15.008 "trtype": "tcp", 00:30:15.008 "traddr": "10.0.0.3", 00:30:15.008 "adrfam": "ipv4", 00:30:15.008 "trsvcid": "4420", 00:30:15.008 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:15.008 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:15.008 "hdgst": true, 00:30:15.008 "ddgst": true 00:30:15.008 }, 00:30:15.008 "method": "bdev_nvme_attach_controller" 00:30:15.008 }' 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # break 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:15.008 14:34:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:15.266 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:15.266 ... 00:30:15.266 fio-3.35 00:30:15.266 Starting 3 threads 00:30:27.488 00:30:27.488 filename0: (groupid=0, jobs=1): err= 0: pid=90867: Wed Nov 6 14:34:53 2024 00:30:27.488 read: IOPS=242, BW=30.3MiB/s (31.8MB/s)(303MiB/10006msec) 00:30:27.488 slat (nsec): min=7658, max=73376, avg=28069.68, stdev=16154.15 00:30:27.488 clat (usec): min=11842, max=14634, avg=12314.82, stdev=175.66 00:30:27.488 lat (usec): min=11859, max=14662, avg=12342.89, stdev=177.70 00:30:27.488 clat percentiles (usec): 00:30:27.488 | 1.00th=[11863], 5.00th=[11994], 10.00th=[12125], 20.00th=[12256], 00:30:27.488 | 30.00th=[12256], 40.00th=[12256], 50.00th=[12387], 60.00th=[12387], 00:30:27.488 | 70.00th=[12387], 80.00th=[12387], 90.00th=[12387], 95.00th=[12518], 00:30:27.488 | 99.00th=[12649], 99.50th=[12911], 99.90th=[14615], 99.95th=[14615], 00:30:27.488 | 99.99th=[14615] 00:30:27.488 bw ( KiB/s): min=29952, max=31488, per=33.35%, avg=31026.89, stdev=457.43, samples=19 00:30:27.488 iops : min= 234, max= 246, avg=242.32, stdev= 3.56, samples=19 00:30:27.488 lat (msec) : 20=100.00% 00:30:27.488 cpu : usr=95.39%, sys=4.09%, ctx=18, majf=0, minf=1072 00:30:27.488 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:27.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.488 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.488 issued rwts: total=2424,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:27.488 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:27.488 filename0: (groupid=0, jobs=1): err= 0: pid=90868: Wed Nov 6 14:34:53 2024 00:30:27.488 read: IOPS=242, BW=30.3MiB/s (31.8MB/s)(303MiB/10002msec) 00:30:27.488 slat (nsec): min=6798, max=94614, avg=18128.99, stdev=9976.25 00:30:27.488 clat (usec): min=9862, 
max=14574, avg=12333.68, stdev=189.19 00:30:27.488 lat (usec): min=9870, max=14669, avg=12351.81, stdev=190.03 00:30:27.488 clat percentiles (usec): 00:30:27.488 | 1.00th=[11994], 5.00th=[11994], 10.00th=[12125], 20.00th=[12256], 00:30:27.488 | 30.00th=[12387], 40.00th=[12387], 50.00th=[12387], 60.00th=[12387], 00:30:27.488 | 70.00th=[12387], 80.00th=[12387], 90.00th=[12387], 95.00th=[12518], 00:30:27.488 | 99.00th=[12649], 99.50th=[12911], 99.90th=[14484], 99.95th=[14615], 00:30:27.488 | 99.99th=[14615] 00:30:27.488 bw ( KiB/s): min=30658, max=31551, per=33.37%, avg=31040.16, stdev=399.97, samples=19 00:30:27.488 iops : min= 239, max= 246, avg=242.42, stdev= 3.15, samples=19 00:30:27.488 lat (msec) : 10=0.12%, 20=99.88% 00:30:27.488 cpu : usr=92.00%, sys=7.48%, ctx=14, majf=0, minf=1074 00:30:27.488 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:27.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.488 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.488 issued rwts: total=2424,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:27.488 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:27.488 filename0: (groupid=0, jobs=1): err= 0: pid=90869: Wed Nov 6 14:34:53 2024 00:30:27.488 read: IOPS=242, BW=30.3MiB/s (31.8MB/s)(303MiB/10011msec) 00:30:27.488 slat (nsec): min=7172, max=74209, avg=28087.56, stdev=16591.65 00:30:27.488 clat (usec): min=7688, max=14057, avg=12306.87, stdev=226.91 00:30:27.488 lat (usec): min=7696, max=14089, avg=12334.96, stdev=229.07 00:30:27.488 clat percentiles (usec): 00:30:27.488 | 1.00th=[11863], 5.00th=[11994], 10.00th=[12125], 20.00th=[12256], 00:30:27.488 | 30.00th=[12256], 40.00th=[12256], 50.00th=[12387], 60.00th=[12387], 00:30:27.488 | 70.00th=[12387], 80.00th=[12387], 90.00th=[12387], 95.00th=[12518], 00:30:27.488 | 99.00th=[12649], 99.50th=[12780], 99.90th=[14091], 99.95th=[14091], 00:30:27.488 | 99.99th=[14091] 00:30:27.488 bw ( KiB/s): min=30658, max=32127, per=33.40%, avg=31067.16, stdev=455.43, samples=19 00:30:27.488 iops : min= 239, max= 250, avg=242.58, stdev= 3.47, samples=19 00:30:27.488 lat (msec) : 10=0.12%, 20=99.88% 00:30:27.488 cpu : usr=95.22%, sys=4.27%, ctx=10, majf=0, minf=1075 00:30:27.488 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:27.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.488 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.488 issued rwts: total=2427,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:27.488 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:27.488 00:30:27.488 Run status group 0 (all jobs): 00:30:27.488 READ: bw=90.8MiB/s (95.2MB/s), 30.3MiB/s-30.3MiB/s (31.8MB/s-31.8MB/s), io=909MiB (954MB), run=10002-10011msec 00:30:27.748 ----------------------------------------------------- 00:30:27.748 Suppressions used: 00:30:27.748 count bytes template 00:30:27.748 5 44 /usr/src/fio/parse.c 00:30:27.748 1 8 libtcmalloc_minimal.so 00:30:27.748 1 904 libcrypto.so 00:30:27.748 ----------------------------------------------------- 00:30:27.748 00:30:27.748 14:34:55 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:30:27.748 14:34:55 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:30:27.748 14:34:55 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:30:27.748 14:34:55 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:27.748 14:34:55 
nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:30:27.748 14:34:55 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:27.748 14:34:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.748 14:34:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:27.748 14:34:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.748 14:34:55 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:27.748 14:34:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.748 14:34:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:27.748 14:34:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.748 00:30:27.748 real 0m12.724s 00:30:27.748 user 0m30.486s 00:30:27.748 sys 0m2.092s 00:30:27.748 14:34:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:27.748 14:34:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:27.748 ************************************ 00:30:27.748 END TEST fio_dif_digest 00:30:27.748 ************************************ 00:30:27.748 14:34:55 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:30:27.748 14:34:55 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:30:27.748 14:34:55 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:27.748 14:34:55 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:30:27.748 14:34:55 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:27.748 14:34:55 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:30:27.748 14:34:55 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:27.748 14:34:55 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:27.748 rmmod nvme_tcp 00:30:27.748 rmmod nvme_fabrics 00:30:27.748 rmmod nvme_keyring 00:30:27.748 14:34:55 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:28.007 14:34:55 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:30:28.007 14:34:55 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:30:28.007 14:34:55 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 90072 ']' 00:30:28.007 14:34:55 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 90072 00:30:28.007 14:34:55 nvmf_dif -- common/autotest_common.sh@952 -- # '[' -z 90072 ']' 00:30:28.007 14:34:55 nvmf_dif -- common/autotest_common.sh@956 -- # kill -0 90072 00:30:28.007 14:34:55 nvmf_dif -- common/autotest_common.sh@957 -- # uname 00:30:28.007 14:34:55 nvmf_dif -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:28.007 14:34:55 nvmf_dif -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90072 00:30:28.007 14:34:55 nvmf_dif -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:28.007 14:34:55 nvmf_dif -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:28.007 killing process with pid 90072 00:30:28.007 14:34:55 nvmf_dif -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90072' 00:30:28.007 14:34:55 nvmf_dif -- common/autotest_common.sh@971 -- # kill 90072 00:30:28.007 14:34:55 nvmf_dif -- common/autotest_common.sh@976 -- # wait 90072 00:30:29.385 14:34:56 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:30:29.386 14:34:56 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:29.644 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:29.644 Waiting for block devices as requested 00:30:29.903 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:30:29.903 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:30:29.903 14:34:57 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:29.903 14:34:57 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:29.903 14:34:57 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:30:29.903 14:34:57 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:30:29.903 14:34:57 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:29.903 14:34:57 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:30:29.903 14:34:57 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:29.903 14:34:57 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:30:29.903 14:34:57 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:30:29.904 14:34:57 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:30:29.904 14:34:57 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:30:30.198 14:34:57 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:30:30.198 14:34:57 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:30:30.198 14:34:57 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:30:30.198 14:34:57 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:30:30.198 14:34:57 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:30:30.198 14:34:57 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:30:30.198 14:34:57 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:30:30.198 14:34:57 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:30:30.198 14:34:57 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:30.198 14:34:57 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:30.198 14:34:57 nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:30:30.198 14:34:57 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:30.198 14:34:57 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:30.198 14:34:57 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:30.198 14:34:57 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:30:30.198 00:30:30.198 real 1m12.096s 00:30:30.198 user 4m11.045s 00:30:30.198 sys 0m20.008s 00:30:30.198 14:34:57 nvmf_dif -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:30.198 14:34:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:30.198 ************************************ 00:30:30.198 END TEST nvmf_dif 00:30:30.198 ************************************ 00:30:30.456 14:34:57 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:30:30.456 14:34:57 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:30:30.456 14:34:57 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:30.456 14:34:57 -- common/autotest_common.sh@10 -- # set +x 00:30:30.456 ************************************ 00:30:30.456 START TEST nvmf_abort_qd_sizes 00:30:30.456 ************************************ 00:30:30.456 14:34:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1127 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:30:30.457 * Looking for test storage... 00:30:30.457 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:30:30.457 14:34:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:30.457 14:34:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:30:30.457 14:34:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:30.457 14:34:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:30.457 14:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:30.457 14:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:30.457 14:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:30.457 14:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:30:30.457 14:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:30:30.457 14:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:30:30.457 14:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:30:30.457 14:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:30:30.457 14:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:30:30.457 14:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:30:30.457 14:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:30.457 14:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:30:30.457 14:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:30:30.457 14:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:30.457 14:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:30.457 14:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:30:30.457 14:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:30:30.457 14:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:30.457 14:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:30:30.457 14:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:30:30.457 14:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:30:30.457 14:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:30:30.457 14:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:30.457 14:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:30:30.457 14:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:30:30.457 14:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:30.457 14:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:30.457 14:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:30:30.457 14:34:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:30.457 14:34:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:30.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:30.457 --rc genhtml_branch_coverage=1 00:30:30.457 --rc genhtml_function_coverage=1 00:30:30.457 --rc genhtml_legend=1 00:30:30.457 --rc geninfo_all_blocks=1 00:30:30.457 --rc geninfo_unexecuted_blocks=1 00:30:30.457 00:30:30.457 ' 00:30:30.457 14:34:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:30.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:30.457 --rc genhtml_branch_coverage=1 00:30:30.457 --rc genhtml_function_coverage=1 00:30:30.457 --rc genhtml_legend=1 00:30:30.457 --rc geninfo_all_blocks=1 00:30:30.457 --rc geninfo_unexecuted_blocks=1 00:30:30.457 00:30:30.457 ' 00:30:30.457 14:34:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:30.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:30.457 --rc genhtml_branch_coverage=1 00:30:30.457 --rc genhtml_function_coverage=1 00:30:30.457 --rc genhtml_legend=1 00:30:30.457 --rc geninfo_all_blocks=1 00:30:30.457 --rc geninfo_unexecuted_blocks=1 00:30:30.457 00:30:30.457 ' 00:30:30.457 14:34:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:30.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:30.457 --rc genhtml_branch_coverage=1 00:30:30.457 --rc genhtml_function_coverage=1 00:30:30.457 --rc genhtml_legend=1 00:30:30.457 --rc geninfo_all_blocks=1 00:30:30.457 --rc geninfo_unexecuted_blocks=1 00:30:30.457 00:30:30.457 ' 00:30:30.457 14:34:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:30.457 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:30:30.457 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:30.457 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:30.457 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:30.457 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:30.457 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:30:30.457 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:30.457 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:30.457 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:30.457 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:30.457 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:30.457 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:30:30.457 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:30:30.457 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:30.457 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:30.457 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:30.457 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:30.457 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:30.457 14:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:30:30.716 14:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:30.716 14:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:30.716 14:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:30.716 14:34:58 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.716 14:34:58 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.716 14:34:58 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.716 14:34:58 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:30:30.716 14:34:58 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.716 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:30:30.716 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:30.716 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:30.716 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:30.716 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:30.716 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:30.716 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:30.716 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:30.716 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:30.716 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:30.716 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:30.716 14:34:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:30:30.716 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:30.716 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:30.716 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:30.716 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:30.716 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:30.716 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:30.716 14:34:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:30.716 14:34:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:30.717 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:30:30.717 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:30:30.717 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:30:30.717 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:30:30.717 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:30:30.717 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:30:30.717 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:30.717 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:30:30.717 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:30:30.717 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:30:30.717 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:30.717 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:30:30.717 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:30.717 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:30:30.717 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:30.717 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:30:30.717 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:30.717 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:30.717 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:30.717 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:30.717 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:30.717 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:30.717 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:30:30.717 Cannot find device "nvmf_init_br" 00:30:30.717 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:30:30.717 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:30:30.717 Cannot find device "nvmf_init_br2" 00:30:30.717 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:30:30.717 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:30:30.717 Cannot find device "nvmf_tgt_br" 00:30:30.717 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:30:30.717 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:30:30.717 Cannot find device "nvmf_tgt_br2" 00:30:30.717 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:30:30.717 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:30:30.717 Cannot find device "nvmf_init_br" 00:30:30.717 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:30:30.717 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:30:30.717 Cannot find device "nvmf_init_br2" 00:30:30.717 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:30:30.717 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:30:30.717 Cannot find device "nvmf_tgt_br" 00:30:30.717 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:30:30.717 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:30:30.717 Cannot find device "nvmf_tgt_br2" 00:30:30.717 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:30:30.717 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:30:30.717 Cannot find device "nvmf_br" 00:30:30.717 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:30:30.717 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:30:30.717 Cannot find device "nvmf_init_if" 00:30:30.717 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:30:30.717 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:30:30.717 Cannot find device "nvmf_init_if2" 00:30:30.717 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:30:30.717 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:30.717 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
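[editor note] The burst of "Cannot find device ..." and "Cannot open network namespace ..." messages above is expected on a fresh host: nvmf_veth_init starts by tearing down anything a previous run may have left behind, and each delete is allowed to fail, which is why the xtrace shows a bare true after every failing command. The pattern, as a short sketch:

    # best-effort cleanup of leftovers from an earlier run; failures are ignored
    ip link set nvmf_init_br nomaster || true
    ip link set nvmf_tgt_br nomaster || true
    ip link delete nvmf_br type bridge || true
    ip link delete nvmf_init_if || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true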
00:30:30.717 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:30:30.717 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:30.717 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:30.717 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:30:30.717 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:30:30.717 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:30.717 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:30:30.717 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:30.976 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:30.976 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:30.976 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:30.976 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:30.976 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:30:30.976 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:30:30.976 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:30:30.976 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:30:30.976 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:30:30.976 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:30:30.976 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:30:30.976 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:30:30.976 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:30:30.976 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:30.976 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:30.976 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:30.976 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:30:30.976 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:30:30.976 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:30:30.976 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:30:30.976 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:30.976 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:30.976 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:30.976 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:30:30.976 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:30:30.977 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:30:30.977 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:30.977 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:30:30.977 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:30:30.977 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:30.977 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:30:30.977 00:30:30.977 --- 10.0.0.3 ping statistics --- 00:30:30.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:30.977 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:30:30.977 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:30:30.977 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:30:30.977 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.085 ms 00:30:30.977 00:30:30.977 --- 10.0.0.4 ping statistics --- 00:30:30.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:30.977 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:30:30.977 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:30.977 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:30.977 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:30:30.977 00:30:30.977 --- 10.0.0.1 ping statistics --- 00:30:30.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:30.977 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:30:30.977 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:30:30.977 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:30.977 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:30:30.977 00:30:30.977 --- 10.0.0.2 ping statistics --- 00:30:30.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:30.977 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:30:30.977 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:30.977 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:30:30.977 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:30:30.977 14:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:31.914 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:31.914 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:30:31.914 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:30:32.173 14:34:59 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:32.173 14:34:59 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:32.173 14:34:59 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:32.173 14:34:59 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:32.173 14:34:59 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:32.173 14:34:59 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:32.173 14:34:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:30:32.173 14:34:59 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:32.173 14:34:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:32.173 14:34:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:32.173 14:34:59 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=91542 00:30:32.173 14:34:59 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:30:32.173 14:34:59 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 91542 00:30:32.173 14:34:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # '[' -z 91542 ']' 00:30:32.173 14:34:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:32.173 14:34:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:32.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:32.173 14:34:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:32.173 14:34:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:32.173 14:34:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:32.173 [2024-11-06 14:34:59.737124] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
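For reference, the nvmf_veth_init sequence traced above reduces to a short standalone sketch. Interface names, addresses, and the port-4420 rule are the ones shown in the trace; the second initiator/target pair, the stale-device teardown, and error handling are omitted, so treat this as a condensed illustration rather than the full helper.

  # Condensed sketch of the veth/bridge topology nvmf_veth_init builds (run as root).
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # target end lives in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                        # bridge the two host-side peers
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
  ping -c 1 10.0.0.3                                             # host -> namespaced target reachability

The SPDK_NVMF comment tag on each rule is what lets the later teardown drop exactly these rules with iptables-save | grep -v SPDK_NVMF | iptables-restore.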
00:30:32.173 [2024-11-06 14:34:59.737257] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:32.432 [2024-11-06 14:34:59.922753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:32.691 [2024-11-06 14:35:00.068010] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:32.691 [2024-11-06 14:35:00.068067] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:32.691 [2024-11-06 14:35:00.068083] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:32.691 [2024-11-06 14:35:00.068095] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:32.691 [2024-11-06 14:35:00.068108] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:32.691 [2024-11-06 14:35:00.070702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:32.691 [2024-11-06 14:35:00.070880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:32.692 [2024-11-06 14:35:00.071109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:32.692 [2024-11-06 14:35:00.071136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:32.692 [2024-11-06 14:35:00.318704] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:30:32.951 14:35:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:32.951 14:35:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@866 -- # return 0 00:30:32.951 14:35:00 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:32.951 14:35:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:32.951 14:35:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:33.211 14:35:00 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:33.211 14:35:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:30:33.211 14:35:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:30:33.211 14:35:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:30:33.211 14:35:00 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:30:33.211 14:35:00 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:30:33.211 14:35:00 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:30:33.211 14:35:00 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:30:33.211 14:35:00 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:30:33.211 14:35:00 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:30:33.211 14:35:00 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:30:33.211 14:35:00 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:30:33.211 14:35:00 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:30:33.211 14:35:00 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:30:33.211 14:35:00 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:30:33.211 14:35:00 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:30:33.212 14:35:00 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:30:33.212 14:35:00 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:30:33.212 14:35:00 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:30:33.212 14:35:00 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:30:33.212 14:35:00 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:30:33.212 14:35:00 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:30:33.212 14:35:00 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:30:33.212 14:35:00 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:30:33.212 14:35:00 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:30:33.212 14:35:00 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:30:33.212 14:35:00 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:30:33.212 14:35:00 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:30:33.212 14:35:00 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:30:33.212 14:35:00 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:30:33.212 14:35:00 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:30:33.212 14:35:00 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:30:33.212 14:35:00 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:30:33.212 14:35:00 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:30:33.212 14:35:00 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:30:33.212 14:35:00 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:30:33.212 14:35:00 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:30:33.212 14:35:00 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:30:33.212 14:35:00 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:30:33.212 14:35:00 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:30:33.212 14:35:00 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:30:33.212 14:35:00 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:30:33.212 14:35:00 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:30:33.212 14:35:00 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:30:33.212 14:35:00 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:30:33.212 14:35:00 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:30:33.212 14:35:00 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:30:33.212 14:35:00 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:30:33.212 14:35:00 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:30:33.212 14:35:00 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:30:33.212 14:35:00 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:30:33.212 14:35:00 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:30:33.212 14:35:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
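The nvme_in_userspace helper traced above selects controller BDFs purely by PCI class code (class 01, subclass 08, prog-if 02). A standalone equivalent of the pipeline it runs, assuming lspci is available, is:

  # Print PCI addresses of NVMe controllers (class/subclass/prog-if 01/08/02),
  # mirroring the lspci/awk pipeline from scripts/common.sh above.
  lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'

In this trace the pipeline resolves to 0000:00:10.0 and 0000:00:11.0, and the test picks the first of the two as the spdk_target controller.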
00:30:33.212 14:35:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:30:33.212 14:35:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:30:33.212 14:35:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:30:33.212 14:35:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:33.212 14:35:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:33.212 ************************************ 00:30:33.212 START TEST spdk_target_abort 00:30:33.212 ************************************ 00:30:33.212 14:35:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1127 -- # spdk_target 00:30:33.212 14:35:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:30:33.212 14:35:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:30:33.212 14:35:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.212 14:35:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:33.212 spdk_targetn1 00:30:33.212 14:35:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:33.212 14:35:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:33.212 14:35:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.212 14:35:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:33.212 [2024-11-06 14:35:00.763695] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:33.212 14:35:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:33.212 14:35:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:30:33.212 14:35:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.212 14:35:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:33.212 14:35:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:33.212 14:35:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:30:33.212 14:35:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.212 14:35:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:33.212 14:35:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:33.212 14:35:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:30:33.212 14:35:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.212 14:35:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:33.212 [2024-11-06 14:35:00.818699] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:30:33.212 14:35:00 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:33.212 14:35:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:30:33.212 14:35:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:30:33.212 14:35:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:30:33.212 14:35:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:30:33.212 14:35:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:30:33.212 14:35:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:30:33.212 14:35:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:30:33.212 14:35:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:30:33.212 14:35:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:30:33.212 14:35:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:33.212 14:35:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:30:33.212 14:35:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:33.212 14:35:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:30:33.212 14:35:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:33.212 14:35:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:30:33.212 14:35:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:33.212 14:35:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:30:33.212 14:35:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:33.212 14:35:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:33.212 14:35:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:33.212 14:35:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:37.404 Initializing NVMe Controllers 00:30:37.404 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:30:37.404 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:37.404 Initialization complete. Launching workers. 
00:30:37.404 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11038, failed: 0 00:30:37.404 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1038, failed to submit 10000 00:30:37.404 success 829, unsuccessful 209, failed 0 00:30:37.404 14:35:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:37.404 14:35:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:40.694 Initializing NVMe Controllers 00:30:40.694 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:30:40.694 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:40.694 Initialization complete. Launching workers. 00:30:40.694 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8928, failed: 0 00:30:40.694 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1171, failed to submit 7757 00:30:40.694 success 404, unsuccessful 767, failed 0 00:30:40.694 14:35:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:40.694 14:35:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:43.981 Initializing NVMe Controllers 00:30:43.981 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:30:43.981 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:43.981 Initialization complete. Launching workers. 
00:30:43.982 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31731, failed: 0 00:30:43.982 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2357, failed to submit 29374 00:30:43.982 success 507, unsuccessful 1850, failed 0 00:30:43.982 14:35:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:30:43.982 14:35:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.982 14:35:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:43.982 14:35:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.982 14:35:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:30:43.982 14:35:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.982 14:35:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:43.982 14:35:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.982 14:35:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 91542 00:30:43.982 14:35:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # '[' -z 91542 ']' 00:30:43.982 14:35:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # kill -0 91542 00:30:43.982 14:35:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # uname 00:30:43.982 14:35:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:43.982 14:35:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 91542 00:30:43.982 14:35:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:43.982 14:35:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:43.982 killing process with pid 91542 00:30:43.982 14:35:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 91542' 00:30:43.982 14:35:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@971 -- # kill 91542 00:30:43.982 14:35:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@976 -- # wait 91542 00:30:45.360 00:30:45.360 real 0m11.865s 00:30:45.360 user 0m46.404s 00:30:45.360 sys 0m2.839s 00:30:45.360 14:35:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:45.360 14:35:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:45.360 ************************************ 00:30:45.360 END TEST spdk_target_abort 00:30:45.360 ************************************ 00:30:45.360 14:35:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:30:45.360 14:35:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:30:45.360 14:35:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:45.360 14:35:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:45.360 ************************************ 00:30:45.360 START TEST kernel_target_abort 00:30:45.360 
************************************ 00:30:45.360 14:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1127 -- # kernel_target 00:30:45.360 14:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:30:45.360 14:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:30:45.360 14:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:45.360 14:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:45.360 14:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:45.360 14:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:45.360 14:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:45.360 14:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:45.360 14:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:45.360 14:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:45.360 14:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:45.360 14:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:30:45.360 14:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:30:45.360 14:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:30:45.360 14:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:45.360 14:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:45.360 14:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:30:45.360 14:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:30:45.360 14:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:30:45.360 14:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:30:45.360 14:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:30:45.360 14:35:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:45.620 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:45.620 Waiting for block devices as requested 00:30:45.879 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:30:45.879 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:30:46.449 14:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:30:46.449 14:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:30:46.449 14:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:30:46.449 14:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:30:46.449 14:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:30:46.449 14:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:30:46.449 14:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:30:46.449 14:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:30:46.449 14:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:30:46.449 No valid GPT data, bailing 00:30:46.449 14:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:30:46.449 14:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:30:46.449 14:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:30:46.449 14:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:30:46.449 14:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:30:46.449 14:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:30:46.449 14:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:30:46.449 14:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:30:46.449 14:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:30:46.449 14:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:30:46.449 14:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:30:46.449 14:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:30:46.449 14:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:30:46.449 No valid GPT data, bailing 00:30:46.449 14:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:30:46.449 14:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:30:46.449 14:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:30:46.449 14:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:30:46.449 14:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:30:46.449 14:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:30:46.449 14:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:30:46.449 14:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:30:46.449 14:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:30:46.449 14:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:30:46.449 14:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:30:46.449 14:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:30:46.449 14:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:30:46.449 No valid GPT data, bailing 00:30:46.449 14:35:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:30:46.449 14:35:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:30:46.449 14:35:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:30:46.449 14:35:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:30:46.449 14:35:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:30:46.449 14:35:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:30:46.449 14:35:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:30:46.449 14:35:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:30:46.449 14:35:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:30:46.449 14:35:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:30:46.449 14:35:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:30:46.449 14:35:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:30:46.449 14:35:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:30:46.449 No valid GPT data, bailing 00:30:46.449 14:35:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:30:46.709 14:35:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:30:46.709 14:35:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:30:46.709 14:35:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:30:46.709 14:35:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:30:46.709 14:35:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:46.709 14:35:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:46.709 14:35:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:30:46.709 14:35:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:30:46.709 14:35:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:30:46.709 14:35:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:30:46.709 14:35:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:30:46.709 14:35:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:30:46.709 14:35:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:30:46.709 14:35:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:30:46.709 14:35:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:30:46.709 14:35:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:30:46.709 14:35:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 --hostid=406d54d0-5e94-472a-a2b3-4291f3ac81e0 -a 10.0.0.1 -t tcp -s 4420 00:30:46.709 00:30:46.709 Discovery Log Number of Records 2, Generation counter 2 00:30:46.709 =====Discovery Log Entry 0====== 00:30:46.709 trtype: tcp 00:30:46.709 adrfam: ipv4 00:30:46.709 subtype: current discovery subsystem 00:30:46.709 treq: not specified, sq flow control disable supported 00:30:46.709 portid: 1 00:30:46.709 trsvcid: 4420 00:30:46.709 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:30:46.709 traddr: 10.0.0.1 00:30:46.709 eflags: none 00:30:46.709 sectype: none 00:30:46.709 =====Discovery Log Entry 1====== 00:30:46.709 trtype: tcp 00:30:46.709 adrfam: ipv4 00:30:46.709 subtype: nvme subsystem 00:30:46.709 treq: not specified, sq flow control disable supported 00:30:46.709 portid: 1 00:30:46.709 trsvcid: 4420 00:30:46.709 subnqn: nqn.2016-06.io.spdk:testnqn 00:30:46.709 traddr: 10.0.0.1 00:30:46.709 eflags: none 00:30:46.709 sectype: none 00:30:46.709 14:35:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:30:46.709 14:35:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:30:46.709 14:35:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:30:46.709 14:35:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:30:46.709 14:35:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:30:46.709 14:35:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:30:46.709 14:35:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:30:46.709 14:35:14 
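The configure_kernel_target steps above drive the in-kernel nvmet target entirely through configfs; the xtrace records only the values being echoed, not the attribute files they land in. A minimal sketch with the standard nvmet configfs paths filled in (the attribute file names below follow the usual /sys/kernel/config/nvmet layout and are assumed rather than visible in the trace; the serial-number write is omitted) looks like:

  # Rough configfs sequence for the kernel NVMe-oF/TCP target configured above.
  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1
  modprobe nvmet
  modprobe nvmet-tcp
  mkdir "$subsys" "$subsys/namespaces/1" "$port"
  echo 1            > "$subsys/attr_allow_any_host"       # assumed target of the bare 'echo 1' in the trace
  echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"  # back the namespace with the spare local NVMe disk
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"                      # expose the subsystem on the port
  nvme discover -t tcp -a 10.0.0.1 -s 4420                 # should list discovery + nqn.2016-06.io.spdk:testnqn

The discovery output above (two log entries, the second being nqn.2016-06.io.spdk:testnqn on 10.0.0.1:4420) is the confirmation that this kernel-side listener is what the kernel_target_abort runs below talk to.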
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:30:46.709 14:35:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:30:46.709 14:35:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:46.709 14:35:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:30:46.709 14:35:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:46.709 14:35:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:30:46.709 14:35:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:46.709 14:35:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:30:46.709 14:35:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:46.709 14:35:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:30:46.709 14:35:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:46.709 14:35:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:46.709 14:35:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:46.709 14:35:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:50.000 Initializing NVMe Controllers 00:30:50.000 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:50.000 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:50.001 Initialization complete. Launching workers. 00:30:50.001 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33341, failed: 0 00:30:50.001 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33341, failed to submit 0 00:30:50.001 success 0, unsuccessful 33341, failed 0 00:30:50.001 14:35:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:50.001 14:35:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:53.291 Initializing NVMe Controllers 00:30:53.291 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:53.291 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:53.291 Initialization complete. Launching workers. 
00:30:53.291 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 66164, failed: 0 00:30:53.291 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32007, failed to submit 34157 00:30:53.291 success 0, unsuccessful 32007, failed 0 00:30:53.291 14:35:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:53.291 14:35:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:56.576 Initializing NVMe Controllers 00:30:56.576 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:56.576 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:56.576 Initialization complete. Launching workers. 00:30:56.576 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 85431, failed: 0 00:30:56.576 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21340, failed to submit 64091 00:30:56.576 success 0, unsuccessful 21340, failed 0 00:30:56.576 14:35:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:30:56.576 14:35:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:30:56.576 14:35:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:30:56.576 14:35:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:56.576 14:35:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:56.576 14:35:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:30:56.576 14:35:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:56.576 14:35:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:30:56.576 14:35:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:30:56.576 14:35:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:57.512 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:58.448 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:30:58.448 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:30:58.448 ************************************ 00:30:58.448 END TEST kernel_target_abort 00:30:58.448 ************************************ 00:30:58.448 00:30:58.448 real 0m13.358s 00:30:58.448 user 0m6.745s 00:30:58.448 sys 0m4.214s 00:30:58.448 14:35:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:58.448 14:35:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:58.448 14:35:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:58.448 14:35:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:30:58.448 
14:35:26 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:58.448 14:35:26 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:30:58.448 14:35:26 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:58.448 14:35:26 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:30:58.448 14:35:26 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:58.448 14:35:26 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:58.707 rmmod nvme_tcp 00:30:58.707 rmmod nvme_fabrics 00:30:58.707 rmmod nvme_keyring 00:30:58.707 14:35:26 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:58.707 14:35:26 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:30:58.707 14:35:26 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:30:58.707 14:35:26 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 91542 ']' 00:30:58.707 14:35:26 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 91542 00:30:58.707 14:35:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # '[' -z 91542 ']' 00:30:58.707 14:35:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@956 -- # kill -0 91542 00:30:58.707 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (91542) - No such process 00:30:58.707 Process with pid 91542 is not found 00:30:58.707 14:35:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@979 -- # echo 'Process with pid 91542 is not found' 00:30:58.707 14:35:26 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:30:58.707 14:35:26 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:58.966 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:59.225 Waiting for block devices as requested 00:30:59.225 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:30:59.225 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:30:59.484 14:35:26 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:59.484 14:35:26 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:59.484 14:35:26 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:30:59.484 14:35:26 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:59.484 14:35:26 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:30:59.484 14:35:26 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:30:59.484 14:35:26 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:59.484 14:35:26 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:30:59.484 14:35:26 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:30:59.484 14:35:26 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:30:59.484 14:35:26 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:30:59.484 14:35:26 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:30:59.484 14:35:26 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:30:59.484 14:35:26 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:30:59.484 14:35:26 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:30:59.484 14:35:26 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:30:59.484 14:35:26 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:30:59.484 14:35:27 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:30:59.484 14:35:27 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:30:59.484 14:35:27 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:59.484 14:35:27 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:59.484 14:35:27 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:30:59.484 14:35:27 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:59.484 14:35:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:59.484 14:35:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:59.743 14:35:27 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:30:59.743 00:30:59.743 real 0m29.315s 00:30:59.743 user 0m54.469s 00:30:59.743 sys 0m9.045s 00:30:59.743 14:35:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:59.743 ************************************ 00:30:59.743 END TEST nvmf_abort_qd_sizes 00:30:59.743 ************************************ 00:30:59.743 14:35:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:59.743 14:35:27 -- spdk/autotest.sh@288 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:30:59.743 14:35:27 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:30:59.744 14:35:27 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:59.744 14:35:27 -- common/autotest_common.sh@10 -- # set +x 00:30:59.744 ************************************ 00:30:59.744 START TEST keyring_file 00:30:59.744 ************************************ 00:30:59.744 14:35:27 keyring_file -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:30:59.744 * Looking for test storage... 
00:30:59.744 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:30:59.744 14:35:27 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:59.744 14:35:27 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:30:59.744 14:35:27 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:00.003 14:35:27 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:00.003 14:35:27 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:00.003 14:35:27 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:00.003 14:35:27 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:00.003 14:35:27 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:31:00.003 14:35:27 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:31:00.003 14:35:27 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:31:00.003 14:35:27 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:31:00.003 14:35:27 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:31:00.003 14:35:27 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:31:00.003 14:35:27 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:31:00.003 14:35:27 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:00.003 14:35:27 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:31:00.003 14:35:27 keyring_file -- scripts/common.sh@345 -- # : 1 00:31:00.003 14:35:27 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:00.003 14:35:27 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:00.003 14:35:27 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:31:00.003 14:35:27 keyring_file -- scripts/common.sh@353 -- # local d=1 00:31:00.003 14:35:27 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:00.003 14:35:27 keyring_file -- scripts/common.sh@355 -- # echo 1 00:31:00.003 14:35:27 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:31:00.003 14:35:27 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:31:00.003 14:35:27 keyring_file -- scripts/common.sh@353 -- # local d=2 00:31:00.003 14:35:27 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:00.003 14:35:27 keyring_file -- scripts/common.sh@355 -- # echo 2 00:31:00.003 14:35:27 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:31:00.003 14:35:27 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:00.003 14:35:27 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:00.003 14:35:27 keyring_file -- scripts/common.sh@368 -- # return 0 00:31:00.003 14:35:27 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:00.003 14:35:27 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:00.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:00.003 --rc genhtml_branch_coverage=1 00:31:00.003 --rc genhtml_function_coverage=1 00:31:00.003 --rc genhtml_legend=1 00:31:00.003 --rc geninfo_all_blocks=1 00:31:00.003 --rc geninfo_unexecuted_blocks=1 00:31:00.003 00:31:00.003 ' 00:31:00.003 14:35:27 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:00.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:00.003 --rc genhtml_branch_coverage=1 00:31:00.003 --rc genhtml_function_coverage=1 00:31:00.004 --rc genhtml_legend=1 00:31:00.004 --rc geninfo_all_blocks=1 00:31:00.004 --rc 
geninfo_unexecuted_blocks=1 00:31:00.004 00:31:00.004 ' 00:31:00.004 14:35:27 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:00.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:00.004 --rc genhtml_branch_coverage=1 00:31:00.004 --rc genhtml_function_coverage=1 00:31:00.004 --rc genhtml_legend=1 00:31:00.004 --rc geninfo_all_blocks=1 00:31:00.004 --rc geninfo_unexecuted_blocks=1 00:31:00.004 00:31:00.004 ' 00:31:00.004 14:35:27 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:00.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:00.004 --rc genhtml_branch_coverage=1 00:31:00.004 --rc genhtml_function_coverage=1 00:31:00.004 --rc genhtml_legend=1 00:31:00.004 --rc geninfo_all_blocks=1 00:31:00.004 --rc geninfo_unexecuted_blocks=1 00:31:00.004 00:31:00.004 ' 00:31:00.004 14:35:27 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:31:00.004 14:35:27 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:00.004 14:35:27 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:31:00.004 14:35:27 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:00.004 14:35:27 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:00.004 14:35:27 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:00.004 14:35:27 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:00.004 14:35:27 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:00.004 14:35:27 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:00.004 14:35:27 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:00.004 14:35:27 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:00.004 14:35:27 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:00.004 14:35:27 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:00.004 14:35:27 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:31:00.004 14:35:27 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:31:00.004 14:35:27 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:00.004 14:35:27 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:00.004 14:35:27 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:00.004 14:35:27 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:00.004 14:35:27 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:00.004 14:35:27 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:31:00.004 14:35:27 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:00.004 14:35:27 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:00.004 14:35:27 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:00.004 14:35:27 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:00.004 14:35:27 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:00.004 14:35:27 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:00.004 14:35:27 keyring_file -- paths/export.sh@5 -- # export PATH 00:31:00.004 14:35:27 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:00.004 14:35:27 keyring_file -- nvmf/common.sh@51 -- # : 0 00:31:00.004 14:35:27 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:00.004 14:35:27 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:00.004 14:35:27 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:00.004 14:35:27 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:00.004 14:35:27 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:00.004 14:35:27 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:00.004 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:00.004 14:35:27 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:00.004 14:35:27 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:00.004 14:35:27 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:00.004 14:35:27 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:31:00.004 14:35:27 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:31:00.004 14:35:27 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:31:00.004 14:35:27 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:31:00.004 14:35:27 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:31:00.004 14:35:27 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:31:00.004 14:35:27 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:31:00.004 14:35:27 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:31:00.004 14:35:27 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:31:00.004 14:35:27 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:31:00.004 14:35:27 keyring_file -- keyring/common.sh@17 -- # digest=0 00:31:00.004 14:35:27 keyring_file -- keyring/common.sh@18 -- # mktemp 00:31:00.004 14:35:27 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.lQxP451TYl 00:31:00.004 14:35:27 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:00.004 14:35:27 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:00.004 14:35:27 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:31:00.004 14:35:27 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:31:00.004 14:35:27 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:31:00.004 14:35:27 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:31:00.004 14:35:27 keyring_file -- nvmf/common.sh@733 -- # python - 00:31:00.004 14:35:27 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.lQxP451TYl 00:31:00.004 14:35:27 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.lQxP451TYl 00:31:00.004 14:35:27 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.lQxP451TYl 00:31:00.004 14:35:27 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:31:00.004 14:35:27 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:31:00.004 14:35:27 keyring_file -- keyring/common.sh@17 -- # name=key1 00:31:00.004 14:35:27 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:31:00.004 14:35:27 keyring_file -- keyring/common.sh@17 -- # digest=0 00:31:00.004 14:35:27 keyring_file -- keyring/common.sh@18 -- # mktemp 00:31:00.004 14:35:27 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.NfmLUuyT6h 00:31:00.004 14:35:27 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:31:00.004 14:35:27 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:31:00.004 14:35:27 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:31:00.004 14:35:27 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:31:00.004 14:35:27 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:31:00.004 14:35:27 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:31:00.004 14:35:27 keyring_file -- nvmf/common.sh@733 -- # python - 00:31:00.004 14:35:27 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.NfmLUuyT6h 00:31:00.004 14:35:27 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.NfmLUuyT6h 00:31:00.004 14:35:27 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.NfmLUuyT6h 00:31:00.004 14:35:27 keyring_file -- keyring/file.sh@30 -- # tgtpid=92580 00:31:00.004 14:35:27 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:00.004 14:35:27 keyring_file -- keyring/file.sh@32 -- # waitforlisten 92580 00:31:00.004 14:35:27 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 92580 ']' 00:31:00.004 14:35:27 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:00.004 14:35:27 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:00.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
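The prep_key steps above come down to three operations: allocate a temp path, render the raw hex key in the NVMe TLS PSK interchange format, and restrict the file to mode 0600. The log elides the python body behind format_interchange_psk, so the encoding sketched below (base64 of the key bytes plus a trailing CRC32, hash id 00 for digest 0) is an assumption about that format rather than the script's exact code:

  key=00112233445566778899aabbccddeeff
  path=$(mktemp)        # /tmp/tmp.lQxP451TYl and /tmp/tmp.NfmLUuyT6h in this run
  # Assumed interchange encoding: NVMeTLSkey-1:<hash>:<base64(key || crc32)>:
  python3 -c 'import base64, sys, zlib; k = bytes.fromhex(sys.argv[1]); print("NVMeTLSkey-1:00:" + base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode() + ":")' "$key" > "$path"
  chmod 0600 "$path"    # file.sh later demonstrates that a 0660 key file is rejected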
00:31:00.004 14:35:27 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:00.004 14:35:27 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:00.004 14:35:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:00.263 [2024-11-06 14:35:27.707484] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:31:00.264 [2024-11-06 14:35:27.707612] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92580 ] 00:31:00.264 [2024-11-06 14:35:27.887483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:00.522 [2024-11-06 14:35:28.027266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:00.781 [2024-11-06 14:35:28.322659] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:01.718 14:35:29 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:01.718 14:35:29 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:31:01.719 14:35:29 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:31:01.719 14:35:29 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.719 14:35:29 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:01.719 [2024-11-06 14:35:29.029957] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:01.719 null0 00:31:01.719 [2024-11-06 14:35:29.061904] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:01.719 [2024-11-06 14:35:29.062209] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:01.719 14:35:29 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.719 14:35:29 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:01.719 14:35:29 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:31:01.719 14:35:29 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:01.719 14:35:29 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:01.719 14:35:29 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:01.719 14:35:29 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:01.719 14:35:29 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:01.719 14:35:29 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:01.719 14:35:29 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.719 14:35:29 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:01.719 [2024-11-06 14:35:29.093817] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:31:01.719 request: 00:31:01.719 { 00:31:01.719 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:31:01.719 "secure_channel": false, 00:31:01.719 "listen_address": { 00:31:01.719 "trtype": "tcp", 00:31:01.719 "traddr": "127.0.0.1", 00:31:01.719 "trsvcid": "4420" 00:31:01.719 }, 00:31:01.719 "method": "nvmf_subsystem_add_listener", 
00:31:01.719 "req_id": 1 00:31:01.719 } 00:31:01.719 Got JSON-RPC error response 00:31:01.719 response: 00:31:01.719 { 00:31:01.719 "code": -32602, 00:31:01.719 "message": "Invalid parameters" 00:31:01.719 } 00:31:01.719 14:35:29 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:01.719 14:35:29 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:31:01.719 14:35:29 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:01.719 14:35:29 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:01.719 14:35:29 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:01.719 14:35:29 keyring_file -- keyring/file.sh@47 -- # bperfpid=92603 00:31:01.719 14:35:29 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:31:01.719 14:35:29 keyring_file -- keyring/file.sh@49 -- # waitforlisten 92603 /var/tmp/bperf.sock 00:31:01.719 14:35:29 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 92603 ']' 00:31:01.719 14:35:29 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:01.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:01.719 14:35:29 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:01.719 14:35:29 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:01.719 14:35:29 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:01.719 14:35:29 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:01.719 [2024-11-06 14:35:29.204004] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
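The -32602 response a little above is the expected outcome of a deliberate negative test: keyring/file.sh has already created the 127.0.0.1:4420 listener, so adding it a second time must fail with "Listener already exists". Outside the harness the same assertion could be written directly against rpc.py (the if/then wrapper here stands in for the suite's NOT helper and is illustrative only):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  if $rpc nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0; then
      echo "duplicate listener was accepted unexpectedly" >&2
      exit 1
  fi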
00:31:01.719 [2024-11-06 14:35:29.204131] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92603 ] 00:31:01.978 [2024-11-06 14:35:29.387250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:01.978 [2024-11-06 14:35:29.528353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:02.235 [2024-11-06 14:35:29.768963] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:02.493 14:35:30 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:02.493 14:35:30 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:31:02.493 14:35:30 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.lQxP451TYl 00:31:02.493 14:35:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.lQxP451TYl 00:31:02.751 14:35:30 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.NfmLUuyT6h 00:31:02.751 14:35:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.NfmLUuyT6h 00:31:03.009 14:35:30 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:31:03.009 14:35:30 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:31:03.009 14:35:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:03.009 14:35:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:03.009 14:35:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:03.009 14:35:30 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.lQxP451TYl == \/\t\m\p\/\t\m\p\.\l\Q\x\P\4\5\1\T\Y\l ]] 00:31:03.009 14:35:30 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:31:03.009 14:35:30 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:31:03.009 14:35:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:03.009 14:35:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:03.009 14:35:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:03.268 14:35:30 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.NfmLUuyT6h == \/\t\m\p\/\t\m\p\.\N\f\m\L\U\u\y\T\6\h ]] 00:31:03.268 14:35:30 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:31:03.268 14:35:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:03.268 14:35:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:03.268 14:35:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:03.268 14:35:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:03.268 14:35:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:03.526 14:35:31 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:31:03.526 14:35:31 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:31:03.526 14:35:31 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:03.526 14:35:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:03.526 14:35:31 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:03.526 14:35:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:03.526 14:35:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:03.785 14:35:31 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:31:03.785 14:35:31 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:03.785 14:35:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:04.044 [2024-11-06 14:35:31.503385] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:04.044 nvme0n1 00:31:04.044 14:35:31 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:31:04.044 14:35:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:04.044 14:35:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:04.044 14:35:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:04.044 14:35:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:04.044 14:35:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:04.303 14:35:31 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:31:04.303 14:35:31 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:31:04.303 14:35:31 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:04.303 14:35:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:04.303 14:35:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:04.303 14:35:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:04.303 14:35:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:04.562 14:35:32 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:31:04.562 14:35:32 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:04.562 Running I/O for 1 seconds... 
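The bperf_cmd calls above reduce to a short RPC sequence against the bdevperf instance: register both key files, verify them via keyring_get_keys, attach an NVMe/TCP controller that presents key0 as its TLS PSK, and start the workload whose results follow below. The same sequence in standalone form, with paths and arguments exactly as they appear in this log:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  $rpc keyring_file_add_key key0 /tmp/tmp.lQxP451TYl
  $rpc keyring_file_add_key key1 /tmp/tmp.NfmLUuyT6h
  $rpc keyring_get_keys | jq '.[] | select(.name == "key0")'     # path and refcnt checks done by keyring/common.sh
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests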
00:31:05.940 12874.00 IOPS, 50.29 MiB/s 00:31:05.940 Latency(us) 00:31:05.940 [2024-11-06T14:35:33.575Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:05.940 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:31:05.940 nvme0n1 : 1.01 12922.31 50.48 0.00 0.00 9879.86 4263.79 17160.43 00:31:05.940 [2024-11-06T14:35:33.575Z] =================================================================================================================== 00:31:05.940 [2024-11-06T14:35:33.575Z] Total : 12922.31 50.48 0.00 0.00 9879.86 4263.79 17160.43 00:31:05.940 { 00:31:05.940 "results": [ 00:31:05.940 { 00:31:05.940 "job": "nvme0n1", 00:31:05.940 "core_mask": "0x2", 00:31:05.940 "workload": "randrw", 00:31:05.940 "percentage": 50, 00:31:05.940 "status": "finished", 00:31:05.940 "queue_depth": 128, 00:31:05.940 "io_size": 4096, 00:31:05.940 "runtime": 1.006244, 00:31:05.940 "iops": 12922.313077146298, 00:31:05.940 "mibps": 50.47778545760273, 00:31:05.940 "io_failed": 0, 00:31:05.940 "io_timeout": 0, 00:31:05.940 "avg_latency_us": 9879.855411062075, 00:31:05.940 "min_latency_us": 4263.787951807229, 00:31:05.940 "max_latency_us": 17160.430522088354 00:31:05.940 } 00:31:05.940 ], 00:31:05.940 "core_count": 1 00:31:05.940 } 00:31:05.940 14:35:33 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:31:05.940 14:35:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:31:05.940 14:35:33 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:31:05.940 14:35:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:05.940 14:35:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:05.940 14:35:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:05.940 14:35:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:05.940 14:35:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:06.199 14:35:33 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:31:06.199 14:35:33 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:31:06.199 14:35:33 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:06.199 14:35:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:06.199 14:35:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:06.199 14:35:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:06.199 14:35:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:06.458 14:35:33 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:31:06.458 14:35:33 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:06.458 14:35:33 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:31:06.458 14:35:33 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:06.458 14:35:33 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:31:06.458 14:35:33 keyring_file -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:06.458 14:35:33 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:31:06.458 14:35:33 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:06.458 14:35:33 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:06.458 14:35:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:06.717 [2024-11-06 14:35:34.109956] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:31:06.717 [2024-11-06 14:35:34.110610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000030280 (107): Transport endpoint is not connected 00:31:06.717 [2024-11-06 14:35:34.111572] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000030280 (9): Bad file descriptor 00:31:06.717 [2024-11-06 14:35:34.112562] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:31:06.717 [2024-11-06 14:35:34.112595] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:31:06.717 [2024-11-06 14:35:34.112610] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:31:06.717 [2024-11-06 14:35:34.112624] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:31:06.717 request: 00:31:06.717 { 00:31:06.717 "name": "nvme0", 00:31:06.717 "trtype": "tcp", 00:31:06.717 "traddr": "127.0.0.1", 00:31:06.717 "adrfam": "ipv4", 00:31:06.717 "trsvcid": "4420", 00:31:06.717 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:06.717 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:06.717 "prchk_reftag": false, 00:31:06.717 "prchk_guard": false, 00:31:06.717 "hdgst": false, 00:31:06.717 "ddgst": false, 00:31:06.717 "psk": "key1", 00:31:06.717 "allow_unrecognized_csi": false, 00:31:06.717 "method": "bdev_nvme_attach_controller", 00:31:06.717 "req_id": 1 00:31:06.717 } 00:31:06.717 Got JSON-RPC error response 00:31:06.717 response: 00:31:06.717 { 00:31:06.717 "code": -5, 00:31:06.717 "message": "Input/output error" 00:31:06.717 } 00:31:06.717 14:35:34 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:31:06.717 14:35:34 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:06.717 14:35:34 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:06.717 14:35:34 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:06.717 14:35:34 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:31:06.717 14:35:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:06.717 14:35:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:06.717 14:35:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:06.717 14:35:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:06.717 14:35:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:06.717 14:35:34 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:31:06.977 14:35:34 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:31:06.977 14:35:34 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:06.977 14:35:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:06.977 14:35:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:06.977 14:35:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:06.977 14:35:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:06.977 14:35:34 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:31:06.977 14:35:34 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:31:06.977 14:35:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:07.236 14:35:34 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:31:07.236 14:35:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:31:07.495 14:35:34 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:31:07.495 14:35:34 keyring_file -- keyring/file.sh@78 -- # jq length 00:31:07.495 14:35:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:07.754 14:35:35 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:31:07.754 14:35:35 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.lQxP451TYl 00:31:07.754 14:35:35 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.lQxP451TYl 00:31:07.754 14:35:35 keyring_file -- 
common/autotest_common.sh@650 -- # local es=0 00:31:07.754 14:35:35 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.lQxP451TYl 00:31:07.754 14:35:35 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:31:07.754 14:35:35 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:07.755 14:35:35 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:31:07.755 14:35:35 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:07.755 14:35:35 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.lQxP451TYl 00:31:07.755 14:35:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.lQxP451TYl 00:31:07.755 [2024-11-06 14:35:35.365010] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.lQxP451TYl': 0100660 00:31:07.755 [2024-11-06 14:35:35.365073] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:31:07.755 request: 00:31:07.755 { 00:31:07.755 "name": "key0", 00:31:07.755 "path": "/tmp/tmp.lQxP451TYl", 00:31:07.755 "method": "keyring_file_add_key", 00:31:07.755 "req_id": 1 00:31:07.755 } 00:31:07.755 Got JSON-RPC error response 00:31:07.755 response: 00:31:07.755 { 00:31:07.755 "code": -1, 00:31:07.755 "message": "Operation not permitted" 00:31:07.755 } 00:31:07.755 14:35:35 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:31:07.755 14:35:35 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:07.755 14:35:35 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:07.755 14:35:35 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:07.755 14:35:35 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.lQxP451TYl 00:31:08.014 14:35:35 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.lQxP451TYl 00:31:08.014 14:35:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.lQxP451TYl 00:31:08.273 14:35:35 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.lQxP451TYl 00:31:08.273 14:35:35 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:31:08.273 14:35:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:08.273 14:35:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:08.274 14:35:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:08.274 14:35:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:08.274 14:35:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:08.274 14:35:35 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:31:08.274 14:35:35 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:08.274 14:35:35 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:31:08.274 14:35:35 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:08.274 14:35:35 
keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:31:08.274 14:35:35 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:08.274 14:35:35 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:31:08.274 14:35:35 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:08.274 14:35:35 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:08.274 14:35:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:08.533 [2024-11-06 14:35:36.057105] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.lQxP451TYl': No such file or directory 00:31:08.533 [2024-11-06 14:35:36.057162] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:31:08.533 [2024-11-06 14:35:36.057188] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:31:08.533 [2024-11-06 14:35:36.057201] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:31:08.533 [2024-11-06 14:35:36.057216] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:08.533 [2024-11-06 14:35:36.057228] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:31:08.533 request: 00:31:08.533 { 00:31:08.533 "name": "nvme0", 00:31:08.533 "trtype": "tcp", 00:31:08.533 "traddr": "127.0.0.1", 00:31:08.533 "adrfam": "ipv4", 00:31:08.533 "trsvcid": "4420", 00:31:08.533 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:08.533 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:08.533 "prchk_reftag": false, 00:31:08.533 "prchk_guard": false, 00:31:08.533 "hdgst": false, 00:31:08.533 "ddgst": false, 00:31:08.533 "psk": "key0", 00:31:08.533 "allow_unrecognized_csi": false, 00:31:08.533 "method": "bdev_nvme_attach_controller", 00:31:08.533 "req_id": 1 00:31:08.533 } 00:31:08.533 Got JSON-RPC error response 00:31:08.533 response: 00:31:08.533 { 00:31:08.533 "code": -19, 00:31:08.533 "message": "No such device" 00:31:08.533 } 00:31:08.533 14:35:36 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:31:08.533 14:35:36 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:08.533 14:35:36 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:08.533 14:35:36 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:08.533 14:35:36 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:31:08.533 14:35:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:08.792 14:35:36 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:31:08.792 14:35:36 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:31:08.792 14:35:36 keyring_file -- keyring/common.sh@17 -- # name=key0 00:31:08.792 14:35:36 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:31:08.792 
14:35:36 keyring_file -- keyring/common.sh@17 -- # digest=0 00:31:08.792 14:35:36 keyring_file -- keyring/common.sh@18 -- # mktemp 00:31:08.792 14:35:36 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.f6H2uZ7w5g 00:31:08.792 14:35:36 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:08.792 14:35:36 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:08.792 14:35:36 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:31:08.792 14:35:36 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:31:08.792 14:35:36 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:31:08.792 14:35:36 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:31:08.792 14:35:36 keyring_file -- nvmf/common.sh@733 -- # python - 00:31:08.792 14:35:36 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.f6H2uZ7w5g 00:31:08.792 14:35:36 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.f6H2uZ7w5g 00:31:08.792 14:35:36 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.f6H2uZ7w5g 00:31:08.792 14:35:36 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.f6H2uZ7w5g 00:31:08.792 14:35:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.f6H2uZ7w5g 00:31:09.051 14:35:36 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:09.051 14:35:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:09.310 nvme0n1 00:31:09.310 14:35:36 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:31:09.310 14:35:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:09.310 14:35:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:09.310 14:35:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:09.310 14:35:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:09.311 14:35:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:09.570 14:35:37 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:31:09.570 14:35:37 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:31:09.570 14:35:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:09.828 14:35:37 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:31:09.828 14:35:37 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:31:09.829 14:35:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:09.829 14:35:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:09.829 14:35:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:10.087 14:35:37 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:31:10.088 14:35:37 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:31:10.088 14:35:37 keyring_file -- 
keyring/common.sh@12 -- # get_key key0 00:31:10.088 14:35:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:10.088 14:35:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:10.088 14:35:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:10.088 14:35:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:10.088 14:35:37 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:31:10.088 14:35:37 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:31:10.088 14:35:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:31:10.346 14:35:37 keyring_file -- keyring/file.sh@105 -- # jq length 00:31:10.346 14:35:37 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:31:10.346 14:35:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:10.606 14:35:38 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:31:10.606 14:35:38 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.f6H2uZ7w5g 00:31:10.606 14:35:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.f6H2uZ7w5g 00:31:10.864 14:35:38 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.NfmLUuyT6h 00:31:10.864 14:35:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.NfmLUuyT6h 00:31:11.123 14:35:38 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:11.123 14:35:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:11.381 nvme0n1 00:31:11.381 14:35:38 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:31:11.381 14:35:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:31:11.641 14:35:39 keyring_file -- keyring/file.sh@113 -- # config='{ 00:31:11.641 "subsystems": [ 00:31:11.641 { 00:31:11.641 "subsystem": "keyring", 00:31:11.641 "config": [ 00:31:11.641 { 00:31:11.641 "method": "keyring_file_add_key", 00:31:11.641 "params": { 00:31:11.641 "name": "key0", 00:31:11.641 "path": "/tmp/tmp.f6H2uZ7w5g" 00:31:11.641 } 00:31:11.641 }, 00:31:11.641 { 00:31:11.641 "method": "keyring_file_add_key", 00:31:11.641 "params": { 00:31:11.641 "name": "key1", 00:31:11.641 "path": "/tmp/tmp.NfmLUuyT6h" 00:31:11.641 } 00:31:11.641 } 00:31:11.641 ] 00:31:11.641 }, 00:31:11.641 { 00:31:11.641 "subsystem": "iobuf", 00:31:11.641 "config": [ 00:31:11.641 { 00:31:11.641 "method": "iobuf_set_options", 00:31:11.641 "params": { 00:31:11.641 "small_pool_count": 8192, 00:31:11.641 "large_pool_count": 1024, 00:31:11.641 "small_bufsize": 8192, 00:31:11.641 "large_bufsize": 135168, 00:31:11.641 "enable_numa": false 00:31:11.641 } 00:31:11.641 } 00:31:11.641 ] 00:31:11.641 }, 00:31:11.641 { 00:31:11.641 "subsystem": 
"sock", 00:31:11.641 "config": [ 00:31:11.641 { 00:31:11.641 "method": "sock_set_default_impl", 00:31:11.641 "params": { 00:31:11.641 "impl_name": "uring" 00:31:11.641 } 00:31:11.641 }, 00:31:11.641 { 00:31:11.641 "method": "sock_impl_set_options", 00:31:11.641 "params": { 00:31:11.641 "impl_name": "ssl", 00:31:11.641 "recv_buf_size": 4096, 00:31:11.641 "send_buf_size": 4096, 00:31:11.641 "enable_recv_pipe": true, 00:31:11.641 "enable_quickack": false, 00:31:11.641 "enable_placement_id": 0, 00:31:11.641 "enable_zerocopy_send_server": true, 00:31:11.641 "enable_zerocopy_send_client": false, 00:31:11.641 "zerocopy_threshold": 0, 00:31:11.641 "tls_version": 0, 00:31:11.641 "enable_ktls": false 00:31:11.641 } 00:31:11.641 }, 00:31:11.641 { 00:31:11.641 "method": "sock_impl_set_options", 00:31:11.641 "params": { 00:31:11.641 "impl_name": "posix", 00:31:11.641 "recv_buf_size": 2097152, 00:31:11.641 "send_buf_size": 2097152, 00:31:11.641 "enable_recv_pipe": true, 00:31:11.641 "enable_quickack": false, 00:31:11.641 "enable_placement_id": 0, 00:31:11.641 "enable_zerocopy_send_server": true, 00:31:11.641 "enable_zerocopy_send_client": false, 00:31:11.641 "zerocopy_threshold": 0, 00:31:11.641 "tls_version": 0, 00:31:11.642 "enable_ktls": false 00:31:11.642 } 00:31:11.642 }, 00:31:11.642 { 00:31:11.642 "method": "sock_impl_set_options", 00:31:11.642 "params": { 00:31:11.642 "impl_name": "uring", 00:31:11.642 "recv_buf_size": 2097152, 00:31:11.642 "send_buf_size": 2097152, 00:31:11.642 "enable_recv_pipe": true, 00:31:11.642 "enable_quickack": false, 00:31:11.642 "enable_placement_id": 0, 00:31:11.642 "enable_zerocopy_send_server": false, 00:31:11.642 "enable_zerocopy_send_client": false, 00:31:11.642 "zerocopy_threshold": 0, 00:31:11.642 "tls_version": 0, 00:31:11.642 "enable_ktls": false 00:31:11.642 } 00:31:11.642 } 00:31:11.642 ] 00:31:11.642 }, 00:31:11.642 { 00:31:11.642 "subsystem": "vmd", 00:31:11.642 "config": [] 00:31:11.642 }, 00:31:11.642 { 00:31:11.642 "subsystem": "accel", 00:31:11.642 "config": [ 00:31:11.642 { 00:31:11.642 "method": "accel_set_options", 00:31:11.642 "params": { 00:31:11.642 "small_cache_size": 128, 00:31:11.642 "large_cache_size": 16, 00:31:11.642 "task_count": 2048, 00:31:11.642 "sequence_count": 2048, 00:31:11.642 "buf_count": 2048 00:31:11.642 } 00:31:11.642 } 00:31:11.642 ] 00:31:11.642 }, 00:31:11.642 { 00:31:11.642 "subsystem": "bdev", 00:31:11.642 "config": [ 00:31:11.642 { 00:31:11.642 "method": "bdev_set_options", 00:31:11.642 "params": { 00:31:11.642 "bdev_io_pool_size": 65535, 00:31:11.642 "bdev_io_cache_size": 256, 00:31:11.642 "bdev_auto_examine": true, 00:31:11.642 "iobuf_small_cache_size": 128, 00:31:11.642 "iobuf_large_cache_size": 16 00:31:11.642 } 00:31:11.642 }, 00:31:11.642 { 00:31:11.642 "method": "bdev_raid_set_options", 00:31:11.642 "params": { 00:31:11.642 "process_window_size_kb": 1024, 00:31:11.642 "process_max_bandwidth_mb_sec": 0 00:31:11.642 } 00:31:11.642 }, 00:31:11.642 { 00:31:11.642 "method": "bdev_iscsi_set_options", 00:31:11.642 "params": { 00:31:11.642 "timeout_sec": 30 00:31:11.642 } 00:31:11.642 }, 00:31:11.642 { 00:31:11.642 "method": "bdev_nvme_set_options", 00:31:11.642 "params": { 00:31:11.642 "action_on_timeout": "none", 00:31:11.642 "timeout_us": 0, 00:31:11.642 "timeout_admin_us": 0, 00:31:11.642 "keep_alive_timeout_ms": 10000, 00:31:11.642 "arbitration_burst": 0, 00:31:11.642 "low_priority_weight": 0, 00:31:11.642 "medium_priority_weight": 0, 00:31:11.642 "high_priority_weight": 0, 00:31:11.642 "nvme_adminq_poll_period_us": 
10000, 00:31:11.642 "nvme_ioq_poll_period_us": 0, 00:31:11.642 "io_queue_requests": 512, 00:31:11.642 "delay_cmd_submit": true, 00:31:11.642 "transport_retry_count": 4, 00:31:11.642 "bdev_retry_count": 3, 00:31:11.642 "transport_ack_timeout": 0, 00:31:11.642 "ctrlr_loss_timeout_sec": 0, 00:31:11.642 "reconnect_delay_sec": 0, 00:31:11.642 "fast_io_fail_timeout_sec": 0, 00:31:11.642 "disable_auto_failback": false, 00:31:11.642 "generate_uuids": false, 00:31:11.642 "transport_tos": 0, 00:31:11.642 "nvme_error_stat": false, 00:31:11.642 "rdma_srq_size": 0, 00:31:11.642 "io_path_stat": false, 00:31:11.642 "allow_accel_sequence": false, 00:31:11.642 "rdma_max_cq_size": 0, 00:31:11.642 "rdma_cm_event_timeout_ms": 0, 00:31:11.642 "dhchap_digests": [ 00:31:11.642 "sha256", 00:31:11.642 "sha384", 00:31:11.642 "sha512" 00:31:11.642 ], 00:31:11.642 "dhchap_dhgroups": [ 00:31:11.642 "null", 00:31:11.642 "ffdhe2048", 00:31:11.642 "ffdhe3072", 00:31:11.642 "ffdhe4096", 00:31:11.642 "ffdhe6144", 00:31:11.642 "ffdhe8192" 00:31:11.642 ] 00:31:11.642 } 00:31:11.642 }, 00:31:11.642 { 00:31:11.642 "method": "bdev_nvme_attach_controller", 00:31:11.642 "params": { 00:31:11.642 "name": "nvme0", 00:31:11.642 "trtype": "TCP", 00:31:11.642 "adrfam": "IPv4", 00:31:11.642 "traddr": "127.0.0.1", 00:31:11.642 "trsvcid": "4420", 00:31:11.642 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:11.642 "prchk_reftag": false, 00:31:11.642 "prchk_guard": false, 00:31:11.642 "ctrlr_loss_timeout_sec": 0, 00:31:11.642 "reconnect_delay_sec": 0, 00:31:11.642 "fast_io_fail_timeout_sec": 0, 00:31:11.642 "psk": "key0", 00:31:11.642 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:11.642 "hdgst": false, 00:31:11.642 "ddgst": false, 00:31:11.642 "multipath": "multipath" 00:31:11.642 } 00:31:11.642 }, 00:31:11.642 { 00:31:11.642 "method": "bdev_nvme_set_hotplug", 00:31:11.642 "params": { 00:31:11.642 "period_us": 100000, 00:31:11.642 "enable": false 00:31:11.642 } 00:31:11.642 }, 00:31:11.642 { 00:31:11.642 "method": "bdev_wait_for_examine" 00:31:11.642 } 00:31:11.642 ] 00:31:11.642 }, 00:31:11.642 { 00:31:11.642 "subsystem": "nbd", 00:31:11.642 "config": [] 00:31:11.642 } 00:31:11.642 ] 00:31:11.642 }' 00:31:11.642 14:35:39 keyring_file -- keyring/file.sh@115 -- # killprocess 92603 00:31:11.642 14:35:39 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 92603 ']' 00:31:11.642 14:35:39 keyring_file -- common/autotest_common.sh@956 -- # kill -0 92603 00:31:11.642 14:35:39 keyring_file -- common/autotest_common.sh@957 -- # uname 00:31:11.642 14:35:39 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:11.642 14:35:39 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 92603 00:31:11.642 killing process with pid 92603 00:31:11.642 Received shutdown signal, test time was about 1.000000 seconds 00:31:11.642 00:31:11.642 Latency(us) 00:31:11.642 [2024-11-06T14:35:39.277Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:11.642 [2024-11-06T14:35:39.277Z] =================================================================================================================== 00:31:11.642 [2024-11-06T14:35:39.277Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:11.642 14:35:39 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:31:11.642 14:35:39 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:31:11.642 14:35:39 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 92603' 00:31:11.642 
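The save_config dump above is captured so the whole setup can be replayed: the first bdevperf (pid 92603) is shut down and, just below, a fresh instance is launched with the saved JSON handed back via -c /dev/fd/63. The same idiom in standalone form, writing to a named file instead of a process-substitution fd (/tmp/bperf_config.json is an illustrative name, not one used by the suite):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  $rpc save_config > /tmp/bperf_config.json        # keyring, sock, accel, bdev, nbd subsystems
  kill "$bperfpid" && wait "$bperfpid"             # bperfpid=92603 for the first instance in this run
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
      -r /var/tmp/bperf.sock -z -c /tmp/bperf_config.json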
14:35:39 keyring_file -- common/autotest_common.sh@971 -- # kill 92603 00:31:11.642 14:35:39 keyring_file -- common/autotest_common.sh@976 -- # wait 92603 00:31:13.021 14:35:40 keyring_file -- keyring/file.sh@118 -- # bperfpid=92846 00:31:13.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:13.021 14:35:40 keyring_file -- keyring/file.sh@120 -- # waitforlisten 92846 /var/tmp/bperf.sock 00:31:13.021 14:35:40 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 92846 ']' 00:31:13.021 14:35:40 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:31:13.021 14:35:40 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:13.021 14:35:40 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:31:13.021 "subsystems": [ 00:31:13.021 { 00:31:13.021 "subsystem": "keyring", 00:31:13.021 "config": [ 00:31:13.021 { 00:31:13.021 "method": "keyring_file_add_key", 00:31:13.021 "params": { 00:31:13.021 "name": "key0", 00:31:13.021 "path": "/tmp/tmp.f6H2uZ7w5g" 00:31:13.021 } 00:31:13.021 }, 00:31:13.021 { 00:31:13.021 "method": "keyring_file_add_key", 00:31:13.021 "params": { 00:31:13.021 "name": "key1", 00:31:13.021 "path": "/tmp/tmp.NfmLUuyT6h" 00:31:13.021 } 00:31:13.021 } 00:31:13.021 ] 00:31:13.021 }, 00:31:13.021 { 00:31:13.021 "subsystem": "iobuf", 00:31:13.021 "config": [ 00:31:13.021 { 00:31:13.021 "method": "iobuf_set_options", 00:31:13.021 "params": { 00:31:13.021 "small_pool_count": 8192, 00:31:13.021 "large_pool_count": 1024, 00:31:13.021 "small_bufsize": 8192, 00:31:13.021 "large_bufsize": 135168, 00:31:13.021 "enable_numa": false 00:31:13.021 } 00:31:13.021 } 00:31:13.021 ] 00:31:13.021 }, 00:31:13.021 { 00:31:13.021 "subsystem": "sock", 00:31:13.021 "config": [ 00:31:13.021 { 00:31:13.021 "method": "sock_set_default_impl", 00:31:13.021 "params": { 00:31:13.021 "impl_name": "uring" 00:31:13.021 } 00:31:13.021 }, 00:31:13.021 { 00:31:13.021 "method": "sock_impl_set_options", 00:31:13.021 "params": { 00:31:13.021 "impl_name": "ssl", 00:31:13.021 "recv_buf_size": 4096, 00:31:13.021 "send_buf_size": 4096, 00:31:13.021 "enable_recv_pipe": true, 00:31:13.021 "enable_quickack": false, 00:31:13.021 "enable_placement_id": 0, 00:31:13.021 "enable_zerocopy_send_server": true, 00:31:13.021 "enable_zerocopy_send_client": false, 00:31:13.021 "zerocopy_threshold": 0, 00:31:13.021 "tls_version": 0, 00:31:13.021 "enable_ktls": false 00:31:13.021 } 00:31:13.021 }, 00:31:13.021 { 00:31:13.021 "method": "sock_impl_set_options", 00:31:13.021 "params": { 00:31:13.021 "impl_name": "posix", 00:31:13.021 "recv_buf_size": 2097152, 00:31:13.021 "send_buf_size": 2097152, 00:31:13.021 "enable_recv_pipe": true, 00:31:13.021 "enable_quickack": false, 00:31:13.021 "enable_placement_id": 0, 00:31:13.021 "enable_zerocopy_send_server": true, 00:31:13.021 "enable_zerocopy_send_client": false, 00:31:13.021 "zerocopy_threshold": 0, 00:31:13.021 "tls_version": 0, 00:31:13.021 "enable_ktls": false 00:31:13.021 } 00:31:13.021 }, 00:31:13.021 { 00:31:13.021 "method": "sock_impl_set_options", 00:31:13.021 "params": { 00:31:13.021 "impl_name": "uring", 00:31:13.021 "recv_buf_size": 2097152, 00:31:13.021 "send_buf_size": 2097152, 00:31:13.021 "enable_recv_pipe": true, 00:31:13.021 "enable_quickack": false, 00:31:13.021 "enable_placement_id": 0, 00:31:13.021 "enable_zerocopy_send_server": false, 00:31:13.021 
"enable_zerocopy_send_client": false, 00:31:13.021 "zerocopy_threshold": 0, 00:31:13.021 "tls_version": 0, 00:31:13.021 "enable_ktls": false 00:31:13.022 } 00:31:13.022 } 00:31:13.022 ] 00:31:13.022 }, 00:31:13.022 { 00:31:13.022 "subsystem": "vmd", 00:31:13.022 "config": [] 00:31:13.022 }, 00:31:13.022 { 00:31:13.022 "subsystem": "accel", 00:31:13.022 "config": [ 00:31:13.022 { 00:31:13.022 "method": "accel_set_options", 00:31:13.022 "params": { 00:31:13.022 "small_cache_size": 128, 00:31:13.022 "large_cache_size": 16, 00:31:13.022 "task_count": 2048, 00:31:13.022 "sequence_count": 2048, 00:31:13.022 "buf_count": 2048 00:31:13.022 } 00:31:13.022 } 00:31:13.022 ] 00:31:13.022 }, 00:31:13.022 { 00:31:13.022 "subsystem": "bdev", 00:31:13.022 "config": [ 00:31:13.022 { 00:31:13.022 "method": "bdev_set_options", 00:31:13.022 "params": { 00:31:13.022 "bdev_io_pool_size": 65535, 00:31:13.022 "bdev_io_cache_size": 256, 00:31:13.022 "bdev_auto_examine": true, 00:31:13.022 "iobuf_small_cache_size": 128, 00:31:13.022 "iobuf_large_cache_size": 16 00:31:13.022 } 00:31:13.022 }, 00:31:13.022 { 00:31:13.022 "method": "bdev_raid_set_options", 00:31:13.022 "params": { 00:31:13.022 "process_window_size_kb": 1024, 00:31:13.022 "process_max_bandwidth_mb_sec": 0 00:31:13.022 } 00:31:13.022 }, 00:31:13.022 { 00:31:13.022 "method": "bdev_iscsi_set_options", 00:31:13.022 "params": { 00:31:13.022 "timeout_sec": 30 00:31:13.022 } 00:31:13.022 }, 00:31:13.022 { 00:31:13.022 "method": "bdev_nvme_set_options", 00:31:13.022 "params": { 00:31:13.022 "action_on_timeout": "none", 00:31:13.022 "timeout_us": 0, 00:31:13.022 "timeout_admin_us": 0, 00:31:13.022 "keep_alive_timeout_ms": 10000, 00:31:13.022 "arbitration_burst": 0, 00:31:13.022 "low_priority_weight": 0, 00:31:13.022 "medium_priority_weight": 0, 00:31:13.022 "high_priority_weight": 0, 00:31:13.022 "nvme_adminq_poll_period_us": 10000, 00:31:13.022 "nvme_ioq_poll_period_us": 0, 00:31:13.022 "io_queue_requests": 512, 00:31:13.022 "delay_cmd_submit": true, 00:31:13.022 "transport_retry_count": 4, 00:31:13.022 "bdev_retry_count": 3, 00:31:13.022 "transport_ack_timeout": 0, 00:31:13.022 "ctrlr_loss_timeout_sec": 0, 00:31:13.022 "reconnect_delay_sec": 0, 00:31:13.022 "fast_io_fail_timeout_sec": 0, 00:31:13.022 "disable_auto_failback": false, 00:31:13.022 "generate_uuids": false, 00:31:13.022 "transport_tos": 0, 00:31:13.022 "nvme_error_stat": false, 00:31:13.022 "rdma_srq_size": 0, 00:31:13.022 "io_path_stat": false, 00:31:13.022 "allow_accel_sequence": false, 00:31:13.022 "rdma_max_cq_size": 0, 00:31:13.022 "rdma_cm_event_timeout_ms": 0, 00:31:13.022 "dhchap_digests": [ 00:31:13.022 "sha256", 00:31:13.022 "sha384", 00:31:13.022 "sha512" 00:31:13.022 ], 00:31:13.022 "dhchap_dhgroups": [ 00:31:13.022 "null", 00:31:13.022 "ffdhe2048", 00:31:13.022 "ffdhe3072", 00:31:13.022 "ffdhe4096", 00:31:13.022 "ffdhe6144", 00:31:13.022 "ffdhe8192" 00:31:13.022 ] 00:31:13.022 } 00:31:13.022 }, 00:31:13.022 { 00:31:13.022 "method": "bdev_nvme_attach_controller", 00:31:13.022 "params": { 00:31:13.022 "name": "nvme0", 00:31:13.022 "trtype": "TCP", 00:31:13.022 "adrfam": "IPv4", 00:31:13.022 "traddr": "127.0.0.1", 00:31:13.022 "trsvcid": "4420", 00:31:13.022 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:13.022 "prchk_reftag": false, 00:31:13.022 "prchk_guard": false, 00:31:13.022 "ctrlr_loss_timeout_sec": 0, 00:31:13.022 "reconnect_delay_sec": 0, 00:31:13.022 "fast_io_fail_timeout_sec": 0, 00:31:13.022 "psk": "key0", 00:31:13.022 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:13.022 
"hdgst": false, 00:31:13.022 "ddgst": false, 00:31:13.022 "multipath": "multipath" 00:31:13.022 } 00:31:13.022 }, 00:31:13.022 { 00:31:13.022 "method": "bdev_nvme_set_hotplug", 00:31:13.022 "params": { 00:31:13.022 "period_us": 100000, 00:31:13.022 "enable": false 00:31:13.022 } 00:31:13.022 }, 00:31:13.022 { 00:31:13.022 "method": "bdev_wait_for_examine" 00:31:13.022 } 00:31:13.022 ] 00:31:13.022 }, 00:31:13.022 { 00:31:13.022 "subsystem": "nbd", 00:31:13.022 "config": [] 00:31:13.022 } 00:31:13.022 ] 00:31:13.022 }' 00:31:13.022 14:35:40 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:13.022 14:35:40 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:13.022 14:35:40 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:13.022 14:35:40 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:13.022 [2024-11-06 14:35:40.340125] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 00:31:13.022 [2024-11-06 14:35:40.340472] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92846 ] 00:31:13.022 [2024-11-06 14:35:40.521317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:13.282 [2024-11-06 14:35:40.670845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:13.540 [2024-11-06 14:35:40.999413] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:13.540 [2024-11-06 14:35:41.157364] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:13.799 14:35:41 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:13.799 14:35:41 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:31:13.799 14:35:41 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:31:13.799 14:35:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:13.799 14:35:41 keyring_file -- keyring/file.sh@121 -- # jq length 00:31:14.058 14:35:41 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:31:14.058 14:35:41 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:31:14.058 14:35:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:14.058 14:35:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:14.058 14:35:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:14.058 14:35:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:14.058 14:35:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:14.318 14:35:41 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:31:14.318 14:35:41 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:31:14.318 14:35:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:14.318 14:35:41 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:14.318 14:35:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:14.318 14:35:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:14.318 14:35:41 keyring_file -- keyring/common.sh@8 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:14.318 14:35:41 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:31:14.318 14:35:41 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:31:14.318 14:35:41 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:31:14.318 14:35:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:31:14.577 14:35:42 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:31:14.577 14:35:42 keyring_file -- keyring/file.sh@1 -- # cleanup 00:31:14.577 14:35:42 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.f6H2uZ7w5g /tmp/tmp.NfmLUuyT6h 00:31:14.577 14:35:42 keyring_file -- keyring/file.sh@20 -- # killprocess 92846 00:31:14.577 14:35:42 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 92846 ']' 00:31:14.577 14:35:42 keyring_file -- common/autotest_common.sh@956 -- # kill -0 92846 00:31:14.577 14:35:42 keyring_file -- common/autotest_common.sh@957 -- # uname 00:31:14.577 14:35:42 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:14.577 14:35:42 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 92846 00:31:14.577 killing process with pid 92846 00:31:14.577 Received shutdown signal, test time was about 1.000000 seconds 00:31:14.578 00:31:14.578 Latency(us) 00:31:14.578 [2024-11-06T14:35:42.213Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:14.578 [2024-11-06T14:35:42.213Z] =================================================================================================================== 00:31:14.578 [2024-11-06T14:35:42.213Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:14.578 14:35:42 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:31:14.578 14:35:42 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:31:14.578 14:35:42 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 92846' 00:31:14.578 14:35:42 keyring_file -- common/autotest_common.sh@971 -- # kill 92846 00:31:14.578 14:35:42 keyring_file -- common/autotest_common.sh@976 -- # wait 92846 00:31:15.957 14:35:43 keyring_file -- keyring/file.sh@21 -- # killprocess 92580 00:31:15.957 14:35:43 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 92580 ']' 00:31:15.957 14:35:43 keyring_file -- common/autotest_common.sh@956 -- # kill -0 92580 00:31:15.957 14:35:43 keyring_file -- common/autotest_common.sh@957 -- # uname 00:31:15.957 14:35:43 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:15.957 14:35:43 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 92580 00:31:15.957 killing process with pid 92580 00:31:15.957 14:35:43 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:15.957 14:35:43 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:15.957 14:35:43 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 92580' 00:31:15.957 14:35:43 keyring_file -- common/autotest_common.sh@971 -- # kill 92580 00:31:15.957 14:35:43 keyring_file -- common/autotest_common.sh@976 -- # wait 92580 00:31:18.492 ************************************ 00:31:18.492 END TEST keyring_file 00:31:18.492 ************************************ 00:31:18.492 00:31:18.492 real 0m18.825s 00:31:18.492 user 0m39.887s 
00:31:18.492 sys 0m3.839s 00:31:18.492 14:35:46 keyring_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:18.492 14:35:46 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:18.492 14:35:46 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:31:18.492 14:35:46 -- spdk/autotest.sh@290 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:31:18.492 14:35:46 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:31:18.492 14:35:46 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:18.492 14:35:46 -- common/autotest_common.sh@10 -- # set +x 00:31:18.492 ************************************ 00:31:18.492 START TEST keyring_linux 00:31:18.492 ************************************ 00:31:18.492 14:35:46 keyring_linux -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:31:18.492 Joined session keyring: 711898156 00:31:18.752 * Looking for test storage... 00:31:18.752 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:31:18.752 14:35:46 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:18.752 14:35:46 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:31:18.752 14:35:46 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:18.752 14:35:46 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:18.752 14:35:46 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:18.752 14:35:46 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:18.752 14:35:46 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:18.752 14:35:46 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:31:18.752 14:35:46 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:31:18.752 14:35:46 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:31:18.752 14:35:46 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:31:18.752 14:35:46 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:31:18.752 14:35:46 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:31:18.752 14:35:46 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:31:18.752 14:35:46 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:18.752 14:35:46 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:31:18.752 14:35:46 keyring_linux -- scripts/common.sh@345 -- # : 1 00:31:18.752 14:35:46 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:18.752 14:35:46 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:18.752 14:35:46 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:31:18.752 14:35:46 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:31:18.752 14:35:46 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:18.752 14:35:46 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:31:18.752 14:35:46 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:31:18.752 14:35:46 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:31:18.752 14:35:46 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:31:18.752 14:35:46 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:18.752 14:35:46 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:31:18.752 14:35:46 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:31:18.752 14:35:46 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:18.752 14:35:46 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:18.752 14:35:46 keyring_linux -- scripts/common.sh@368 -- # return 0 00:31:18.752 14:35:46 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:18.752 14:35:46 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:18.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:18.752 --rc genhtml_branch_coverage=1 00:31:18.752 --rc genhtml_function_coverage=1 00:31:18.752 --rc genhtml_legend=1 00:31:18.752 --rc geninfo_all_blocks=1 00:31:18.752 --rc geninfo_unexecuted_blocks=1 00:31:18.752 00:31:18.752 ' 00:31:18.752 14:35:46 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:18.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:18.752 --rc genhtml_branch_coverage=1 00:31:18.752 --rc genhtml_function_coverage=1 00:31:18.752 --rc genhtml_legend=1 00:31:18.752 --rc geninfo_all_blocks=1 00:31:18.752 --rc geninfo_unexecuted_blocks=1 00:31:18.752 00:31:18.752 ' 00:31:18.752 14:35:46 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:18.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:18.752 --rc genhtml_branch_coverage=1 00:31:18.752 --rc genhtml_function_coverage=1 00:31:18.752 --rc genhtml_legend=1 00:31:18.752 --rc geninfo_all_blocks=1 00:31:18.752 --rc geninfo_unexecuted_blocks=1 00:31:18.752 00:31:18.752 ' 00:31:18.752 14:35:46 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:18.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:18.752 --rc genhtml_branch_coverage=1 00:31:18.752 --rc genhtml_function_coverage=1 00:31:18.752 --rc genhtml_legend=1 00:31:18.752 --rc geninfo_all_blocks=1 00:31:18.752 --rc geninfo_unexecuted_blocks=1 00:31:18.752 00:31:18.752 ' 00:31:18.752 14:35:46 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:31:18.752 14:35:46 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:18.752 14:35:46 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:31:18.752 14:35:46 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:18.752 14:35:46 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:18.752 14:35:46 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:18.752 14:35:46 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:18.752 14:35:46 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:18.752 14:35:46 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:18.752 14:35:46 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:18.752 14:35:46 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:18.752 14:35:46 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:18.752 14:35:46 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:18.752 14:35:46 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:31:18.752 14:35:46 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=406d54d0-5e94-472a-a2b3-4291f3ac81e0 00:31:18.752 14:35:46 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:18.752 14:35:46 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:18.752 14:35:46 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:18.752 14:35:46 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:18.752 14:35:46 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:18.752 14:35:46 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:31:18.752 14:35:46 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:18.752 14:35:46 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:18.752 14:35:46 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:18.753 14:35:46 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.753 14:35:46 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.753 14:35:46 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.753 14:35:46 keyring_linux -- paths/export.sh@5 -- # export PATH 00:31:18.753 14:35:46 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.753 14:35:46 keyring_linux -- nvmf/common.sh@51 -- # : 0 
00:31:18.753 14:35:46 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:18.753 14:35:46 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:18.753 14:35:46 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:18.753 14:35:46 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:18.753 14:35:46 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:18.753 14:35:46 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:18.753 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:18.753 14:35:46 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:18.753 14:35:46 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:18.753 14:35:46 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:18.753 14:35:46 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:31:18.753 14:35:46 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:31:18.753 14:35:46 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:31:18.753 14:35:46 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:31:18.753 14:35:46 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:31:18.753 14:35:46 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:31:18.753 14:35:46 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:31:18.753 14:35:46 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:31:18.753 14:35:46 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:31:18.753 14:35:46 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:31:19.013 14:35:46 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:31:19.013 14:35:46 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:31:19.013 14:35:46 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:19.013 14:35:46 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:19.013 14:35:46 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:31:19.013 14:35:46 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:31:19.013 14:35:46 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:31:19.013 14:35:46 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:31:19.013 14:35:46 keyring_linux -- nvmf/common.sh@733 -- # python - 00:31:19.013 14:35:46 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:31:19.013 14:35:46 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:31:19.013 /tmp/:spdk-test:key0 00:31:19.013 14:35:46 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:31:19.013 14:35:46 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:31:19.013 14:35:46 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:31:19.013 14:35:46 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:31:19.013 14:35:46 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:31:19.013 14:35:46 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:31:19.013 14:35:46 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:31:19.013 14:35:46 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:31:19.013 14:35:46 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:31:19.013 14:35:46 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:31:19.013 14:35:46 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:31:19.013 14:35:46 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:31:19.013 14:35:46 keyring_linux -- nvmf/common.sh@733 -- # python - 00:31:19.013 14:35:46 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:31:19.013 /tmp/:spdk-test:key1 00:31:19.013 14:35:46 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:31:19.013 14:35:46 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=93009 00:31:19.013 14:35:46 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:19.013 14:35:46 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 93009 00:31:19.013 14:35:46 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 93009 ']' 00:31:19.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:19.013 14:35:46 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:19.013 14:35:46 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:19.013 14:35:46 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:19.013 14:35:46 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:19.013 14:35:46 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:19.013 [2024-11-06 14:35:46.603060] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:31:19.013 [2024-11-06 14:35:46.603440] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93009 ] 00:31:19.272 [2024-11-06 14:35:46.786449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:19.531 [2024-11-06 14:35:46.930610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:19.790 [2024-11-06 14:35:47.239390] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:20.358 14:35:47 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:20.358 14:35:47 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:31:20.358 14:35:47 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:31:20.358 14:35:47 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.358 14:35:47 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:20.358 [2024-11-06 14:35:47.918384] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:20.358 null0 00:31:20.358 [2024-11-06 14:35:47.950331] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:20.358 [2024-11-06 14:35:47.950704] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:20.358 14:35:47 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.358 14:35:47 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:31:20.358 838614095 00:31:20.358 14:35:47 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:31:20.358 190314271 00:31:20.358 14:35:47 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=93027 00:31:20.358 14:35:47 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:31:20.358 14:35:47 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 93027 /var/tmp/bperf.sock 00:31:20.358 14:35:47 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 93027 ']' 00:31:20.358 14:35:47 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:20.358 14:35:47 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:20.358 14:35:47 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:20.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:20.358 14:35:47 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:20.358 14:35:47 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:20.617 [2024-11-06 14:35:48.082332] Starting SPDK v25.01-pre git sha1 d1c46ed8e / DPDK 24.03.0 initialization... 
00:31:20.617 [2024-11-06 14:35:48.082456] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93027 ] 00:31:20.876 [2024-11-06 14:35:48.264181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:20.876 [2024-11-06 14:35:48.408023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:21.444 14:35:48 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:21.444 14:35:48 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:31:21.444 14:35:48 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:31:21.444 14:35:48 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:31:21.704 14:35:49 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:31:21.704 14:35:49 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:21.962 [2024-11-06 14:35:49.555680] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:31:22.230 14:35:49 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:31:22.230 14:35:49 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:31:22.489 [2024-11-06 14:35:49.896083] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:22.489 nvme0n1 00:31:22.489 14:35:49 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:31:22.489 14:35:49 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:31:22.489 14:35:49 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:31:22.489 14:35:50 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:31:22.489 14:35:50 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:31:22.489 14:35:50 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:22.748 14:35:50 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:31:22.748 14:35:50 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:31:22.748 14:35:50 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:31:22.748 14:35:50 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:31:22.748 14:35:50 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:22.748 14:35:50 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:31:22.748 14:35:50 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:23.006 14:35:50 keyring_linux -- keyring/linux.sh@25 -- # sn=838614095 00:31:23.006 14:35:50 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:31:23.006 14:35:50 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
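For reference, the keyring_linux flow traced above boils down to a handful of commands. A minimal illustrative sketch follows, not a verbatim excerpt of this run; it assumes a bdevperf instance already listening on /var/tmp/bperf.sock, rpc.py taken from scripts/ in the SPDK tree, and the example key value used by this test:

# Load the TLS PSK into the session keyring; keyctl prints its serial number (838614095 in this run).
keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s

# Enable the Linux keyring backend, finish framework init, and attach over TCP using that key as the PSK.
scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0

# Confirm the key is visible to both the kernel and SPDK, then remove it once the run is finished.
keyctl search @s user :spdk-test:key0                      # prints the serial number
scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
keyctl unlink "$(keyctl search @s user :spdk-test:key0)"   # "1 links removed"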
00:31:23.006 14:35:50 keyring_linux -- keyring/linux.sh@26 -- # [[ 838614095 == \8\3\8\6\1\4\0\9\5 ]] 00:31:23.006 14:35:50 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 838614095 00:31:23.007 14:35:50 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:31:23.007 14:35:50 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:23.007 Running I/O for 1 seconds... 00:31:23.943 13553.00 IOPS, 52.94 MiB/s 00:31:23.943 Latency(us) 00:31:23.943 [2024-11-06T14:35:51.578Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:23.943 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:31:23.943 nvme0n1 : 1.01 13550.62 52.93 0.00 0.00 9397.62 7737.99 16528.76 00:31:23.943 [2024-11-06T14:35:51.578Z] =================================================================================================================== 00:31:23.943 [2024-11-06T14:35:51.578Z] Total : 13550.62 52.93 0.00 0.00 9397.62 7737.99 16528.76 00:31:23.943 { 00:31:23.943 "results": [ 00:31:23.943 { 00:31:23.943 "job": "nvme0n1", 00:31:23.943 "core_mask": "0x2", 00:31:23.943 "workload": "randread", 00:31:23.943 "status": "finished", 00:31:23.943 "queue_depth": 128, 00:31:23.944 "io_size": 4096, 00:31:23.944 "runtime": 1.009622, 00:31:23.944 "iops": 13550.615973106767, 00:31:23.944 "mibps": 52.93209364494831, 00:31:23.944 "io_failed": 0, 00:31:23.944 "io_timeout": 0, 00:31:23.944 "avg_latency_us": 9397.622027324267, 00:31:23.944 "min_latency_us": 7737.985542168674, 00:31:23.944 "max_latency_us": 16528.758232931727 00:31:23.944 } 00:31:23.944 ], 00:31:23.944 "core_count": 1 00:31:23.944 } 00:31:23.944 14:35:51 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:31:23.944 14:35:51 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:31:24.203 14:35:51 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:31:24.203 14:35:51 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:31:24.203 14:35:51 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:31:24.203 14:35:51 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:31:24.203 14:35:51 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:31:24.203 14:35:51 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:24.462 14:35:52 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:31:24.462 14:35:52 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:31:24.462 14:35:52 keyring_linux -- keyring/linux.sh@23 -- # return 00:31:24.462 14:35:52 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:24.462 14:35:52 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:31:24.462 14:35:52 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:24.462 
14:35:52 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:31:24.462 14:35:52 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:24.462 14:35:52 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:31:24.462 14:35:52 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:24.462 14:35:52 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:24.462 14:35:52 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:24.722 [2024-11-06 14:35:52.248433] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:31:24.722 [2024-11-06 14:35:52.248531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000030280 (107): Transport endpoint is not connected 00:31:24.722 [2024-11-06 14:35:52.249491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000030280 (9): Bad file descriptor 00:31:24.722 [2024-11-06 14:35:52.250479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:31:24.722 [2024-11-06 14:35:52.250515] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:31:24.722 [2024-11-06 14:35:52.250535] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:31:24.722 [2024-11-06 14:35:52.250550] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:31:24.722 request: 00:31:24.722 { 00:31:24.722 "name": "nvme0", 00:31:24.722 "trtype": "tcp", 00:31:24.722 "traddr": "127.0.0.1", 00:31:24.722 "adrfam": "ipv4", 00:31:24.722 "trsvcid": "4420", 00:31:24.722 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:24.722 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:24.722 "prchk_reftag": false, 00:31:24.722 "prchk_guard": false, 00:31:24.722 "hdgst": false, 00:31:24.722 "ddgst": false, 00:31:24.722 "psk": ":spdk-test:key1", 00:31:24.722 "allow_unrecognized_csi": false, 00:31:24.722 "method": "bdev_nvme_attach_controller", 00:31:24.722 "req_id": 1 00:31:24.722 } 00:31:24.722 Got JSON-RPC error response 00:31:24.722 response: 00:31:24.722 { 00:31:24.722 "code": -5, 00:31:24.722 "message": "Input/output error" 00:31:24.722 } 00:31:24.722 14:35:52 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:31:24.722 14:35:52 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:24.722 14:35:52 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:24.722 14:35:52 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:24.722 14:35:52 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:31:24.722 14:35:52 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:31:24.722 14:35:52 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:31:24.722 14:35:52 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:31:24.722 14:35:52 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:31:24.722 14:35:52 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:31:24.722 14:35:52 keyring_linux -- keyring/linux.sh@33 -- # sn=838614095 00:31:24.722 14:35:52 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 838614095 00:31:24.722 1 links removed 00:31:24.722 14:35:52 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:31:24.722 14:35:52 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:31:24.722 14:35:52 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:31:24.722 14:35:52 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:31:24.722 14:35:52 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:31:24.722 14:35:52 keyring_linux -- keyring/linux.sh@33 -- # sn=190314271 00:31:24.722 14:35:52 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 190314271 00:31:24.722 1 links removed 00:31:24.722 14:35:52 keyring_linux -- keyring/linux.sh@41 -- # killprocess 93027 00:31:24.722 14:35:52 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 93027 ']' 00:31:24.722 14:35:52 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 93027 00:31:24.722 14:35:52 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:31:24.722 14:35:52 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:24.722 14:35:52 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 93027 00:31:24.722 killing process with pid 93027 00:31:24.722 Received shutdown signal, test time was about 1.000000 seconds 00:31:24.722 00:31:24.722 Latency(us) 00:31:24.722 [2024-11-06T14:35:52.357Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:24.722 [2024-11-06T14:35:52.357Z] =================================================================================================================== 00:31:24.722 [2024-11-06T14:35:52.357Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:24.722 14:35:52 keyring_linux -- 
common/autotest_common.sh@958 -- # process_name=reactor_1 00:31:24.722 14:35:52 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:31:24.722 14:35:52 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 93027' 00:31:24.722 14:35:52 keyring_linux -- common/autotest_common.sh@971 -- # kill 93027 00:31:24.722 14:35:52 keyring_linux -- common/autotest_common.sh@976 -- # wait 93027 00:31:26.102 14:35:53 keyring_linux -- keyring/linux.sh@42 -- # killprocess 93009 00:31:26.102 14:35:53 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 93009 ']' 00:31:26.102 14:35:53 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 93009 00:31:26.102 14:35:53 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:31:26.102 14:35:53 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:26.102 14:35:53 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 93009 00:31:26.102 killing process with pid 93009 00:31:26.102 14:35:53 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:26.102 14:35:53 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:26.102 14:35:53 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 93009' 00:31:26.102 14:35:53 keyring_linux -- common/autotest_common.sh@971 -- # kill 93009 00:31:26.102 14:35:53 keyring_linux -- common/autotest_common.sh@976 -- # wait 93009 00:31:28.635 00:31:28.635 real 0m9.954s 00:31:28.635 user 0m15.808s 00:31:28.635 sys 0m2.086s 00:31:28.635 ************************************ 00:31:28.635 END TEST keyring_linux 00:31:28.635 ************************************ 00:31:28.635 14:35:56 keyring_linux -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:28.635 14:35:56 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:28.635 14:35:56 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:31:28.635 14:35:56 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:31:28.635 14:35:56 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:31:28.636 14:35:56 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:31:28.636 14:35:56 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:31:28.636 14:35:56 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:31:28.636 14:35:56 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:31:28.636 14:35:56 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:31:28.636 14:35:56 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:31:28.636 14:35:56 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:31:28.636 14:35:56 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:31:28.636 14:35:56 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:31:28.636 14:35:56 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:31:28.636 14:35:56 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:31:28.636 14:35:56 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:31:28.636 14:35:56 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:31:28.636 14:35:56 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:31:28.636 14:35:56 -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:28.636 14:35:56 -- common/autotest_common.sh@10 -- # set +x 00:31:28.636 14:35:56 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:31:28.636 14:35:56 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:31:28.636 14:35:56 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:31:28.636 14:35:56 -- common/autotest_common.sh@10 -- # set +x 00:31:31.171 INFO: APP EXITING 00:31:31.171 INFO: killing all VMs 
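For comparison, the file-based variant exercised earlier in this log (keyring_file) drives the same attach path through a JSON config rather than the kernel keyring. The following is a condensed sketch, not the full config echoed by the test: the key path is the temporary file created during the run, and the config filename here is only an example.

cat > /tmp/keyring.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "keyring",
      "config": [
        { "method": "keyring_file_add_key", "params": { "name": "key0", "path": "/tmp/tmp.f6H2uZ7w5g" } }
      ]
    },
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "nvme0", "trtype": "TCP", "adrfam": "IPv4",
            "traddr": "127.0.0.1", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "psk": "key0"
          }
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF

# Same bdevperf invocation as the test, reading the config from a file instead of /dev/fd/63.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r /var/tmp/bperf.sock -z -c /tmp/keyring.json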
00:31:31.171 INFO: killing vhost app 00:31:31.171 INFO: EXIT DONE 00:31:32.113 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:32.113 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:31:32.113 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:31:33.050 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:33.050 Cleaning 00:31:33.050 Removing: /var/run/dpdk/spdk0/config 00:31:33.050 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:31:33.050 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:31:33.050 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:31:33.050 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:31:33.050 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:31:33.050 Removing: /var/run/dpdk/spdk0/hugepage_info 00:31:33.050 Removing: /var/run/dpdk/spdk1/config 00:31:33.050 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:31:33.050 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:31:33.050 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:31:33.050 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:31:33.050 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:31:33.050 Removing: /var/run/dpdk/spdk1/hugepage_info 00:31:33.050 Removing: /var/run/dpdk/spdk2/config 00:31:33.050 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:31:33.050 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:31:33.050 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:31:33.050 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:31:33.050 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:31:33.050 Removing: /var/run/dpdk/spdk2/hugepage_info 00:31:33.050 Removing: /var/run/dpdk/spdk3/config 00:31:33.050 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:31:33.050 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:31:33.050 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:31:33.050 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:31:33.050 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:31:33.050 Removing: /var/run/dpdk/spdk3/hugepage_info 00:31:33.050 Removing: /var/run/dpdk/spdk4/config 00:31:33.050 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:31:33.050 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:31:33.050 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:31:33.050 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:31:33.050 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:31:33.050 Removing: /var/run/dpdk/spdk4/hugepage_info 00:31:33.050 Removing: /dev/shm/nvmf_trace.0 00:31:33.050 Removing: /dev/shm/spdk_tgt_trace.pid57549 00:31:33.050 Removing: /var/run/dpdk/spdk0 00:31:33.050 Removing: /var/run/dpdk/spdk1 00:31:33.050 Removing: /var/run/dpdk/spdk2 00:31:33.050 Removing: /var/run/dpdk/spdk3 00:31:33.050 Removing: /var/run/dpdk/spdk4 00:31:33.050 Removing: /var/run/dpdk/spdk_pid57303 00:31:33.050 Removing: /var/run/dpdk/spdk_pid57549 00:31:33.050 Removing: /var/run/dpdk/spdk_pid57778 00:31:33.050 Removing: /var/run/dpdk/spdk_pid57893 00:31:33.050 Removing: /var/run/dpdk/spdk_pid57938 00:31:33.050 Removing: /var/run/dpdk/spdk_pid58077 00:31:33.050 Removing: /var/run/dpdk/spdk_pid58095 00:31:33.050 Removing: /var/run/dpdk/spdk_pid58265 00:31:33.050 Removing: /var/run/dpdk/spdk_pid58468 00:31:33.050 Removing: /var/run/dpdk/spdk_pid58640 00:31:33.050 Removing: /var/run/dpdk/spdk_pid58751 00:31:33.050 
Removing: /var/run/dpdk/spdk_pid58864 00:31:33.050 Removing: /var/run/dpdk/spdk_pid58986 00:31:33.050 Removing: /var/run/dpdk/spdk_pid59094 00:31:33.050 Removing: /var/run/dpdk/spdk_pid59139 00:31:33.050 Removing: /var/run/dpdk/spdk_pid59175 00:31:33.308 Removing: /var/run/dpdk/spdk_pid59246 00:31:33.308 Removing: /var/run/dpdk/spdk_pid59376 00:31:33.308 Removing: /var/run/dpdk/spdk_pid59833 00:31:33.308 Removing: /var/run/dpdk/spdk_pid59909 00:31:33.308 Removing: /var/run/dpdk/spdk_pid59983 00:31:33.308 Removing: /var/run/dpdk/spdk_pid59999 00:31:33.308 Removing: /var/run/dpdk/spdk_pid60158 00:31:33.308 Removing: /var/run/dpdk/spdk_pid60185 00:31:33.308 Removing: /var/run/dpdk/spdk_pid60333 00:31:33.308 Removing: /var/run/dpdk/spdk_pid60355 00:31:33.308 Removing: /var/run/dpdk/spdk_pid60424 00:31:33.308 Removing: /var/run/dpdk/spdk_pid60448 00:31:33.308 Removing: /var/run/dpdk/spdk_pid60512 00:31:33.308 Removing: /var/run/dpdk/spdk_pid60534 00:31:33.308 Removing: /var/run/dpdk/spdk_pid60736 00:31:33.308 Removing: /var/run/dpdk/spdk_pid60767 00:31:33.308 Removing: /var/run/dpdk/spdk_pid60856 00:31:33.308 Removing: /var/run/dpdk/spdk_pid61225 00:31:33.308 Removing: /var/run/dpdk/spdk_pid61243 00:31:33.308 Removing: /var/run/dpdk/spdk_pid61287 00:31:33.308 Removing: /var/run/dpdk/spdk_pid61318 00:31:33.308 Removing: /var/run/dpdk/spdk_pid61351 00:31:33.308 Removing: /var/run/dpdk/spdk_pid61382 00:31:33.308 Removing: /var/run/dpdk/spdk_pid61413 00:31:33.308 Removing: /var/run/dpdk/spdk_pid61446 00:31:33.308 Removing: /var/run/dpdk/spdk_pid61477 00:31:33.308 Removing: /var/run/dpdk/spdk_pid61508 00:31:33.308 Removing: /var/run/dpdk/spdk_pid61541 00:31:33.308 Removing: /var/run/dpdk/spdk_pid61583 00:31:33.308 Removing: /var/run/dpdk/spdk_pid61613 00:31:33.308 Removing: /var/run/dpdk/spdk_pid61647 00:31:33.308 Removing: /var/run/dpdk/spdk_pid61678 00:31:33.308 Removing: /var/run/dpdk/spdk_pid61714 00:31:33.308 Removing: /var/run/dpdk/spdk_pid61742 00:31:33.308 Removing: /var/run/dpdk/spdk_pid61779 00:31:33.308 Removing: /var/run/dpdk/spdk_pid61810 00:31:33.308 Removing: /var/run/dpdk/spdk_pid61843 00:31:33.308 Removing: /var/run/dpdk/spdk_pid61891 00:31:33.308 Removing: /var/run/dpdk/spdk_pid61922 00:31:33.308 Removing: /var/run/dpdk/spdk_pid61969 00:31:33.308 Removing: /var/run/dpdk/spdk_pid62053 00:31:33.308 Removing: /var/run/dpdk/spdk_pid62099 00:31:33.308 Removing: /var/run/dpdk/spdk_pid62126 00:31:33.308 Removing: /var/run/dpdk/spdk_pid62172 00:31:33.308 Removing: /var/run/dpdk/spdk_pid62199 00:31:33.308 Removing: /var/run/dpdk/spdk_pid62223 00:31:33.308 Removing: /var/run/dpdk/spdk_pid62279 00:31:33.308 Removing: /var/run/dpdk/spdk_pid62310 00:31:33.308 Removing: /var/run/dpdk/spdk_pid62356 00:31:33.308 Removing: /var/run/dpdk/spdk_pid62383 00:31:33.308 Removing: /var/run/dpdk/spdk_pid62410 00:31:33.308 Removing: /var/run/dpdk/spdk_pid62432 00:31:33.308 Removing: /var/run/dpdk/spdk_pid62459 00:31:33.308 Removing: /var/run/dpdk/spdk_pid62486 00:31:33.308 Removing: /var/run/dpdk/spdk_pid62513 00:31:33.308 Removing: /var/run/dpdk/spdk_pid62540 00:31:33.308 Removing: /var/run/dpdk/spdk_pid62586 00:31:33.567 Removing: /var/run/dpdk/spdk_pid62630 00:31:33.567 Removing: /var/run/dpdk/spdk_pid62657 00:31:33.567 Removing: /var/run/dpdk/spdk_pid62703 00:31:33.567 Removing: /var/run/dpdk/spdk_pid62725 00:31:33.567 Removing: /var/run/dpdk/spdk_pid62750 00:31:33.567 Removing: /var/run/dpdk/spdk_pid62808 00:31:33.567 Removing: /var/run/dpdk/spdk_pid62837 00:31:33.567 Removing: 
/var/run/dpdk/spdk_pid62881 00:31:33.567 Removing: /var/run/dpdk/spdk_pid62906 00:31:33.567 Removing: /var/run/dpdk/spdk_pid62931 00:31:33.567 Removing: /var/run/dpdk/spdk_pid62956 00:31:33.567 Removing: /var/run/dpdk/spdk_pid62981 00:31:33.567 Removing: /var/run/dpdk/spdk_pid63007 00:31:33.567 Removing: /var/run/dpdk/spdk_pid63031 00:31:33.567 Removing: /var/run/dpdk/spdk_pid63052 00:31:33.567 Removing: /var/run/dpdk/spdk_pid63151 00:31:33.567 Removing: /var/run/dpdk/spdk_pid63255 00:31:33.567 Removing: /var/run/dpdk/spdk_pid63424 00:31:33.567 Removing: /var/run/dpdk/spdk_pid63475 00:31:33.567 Removing: /var/run/dpdk/spdk_pid63532 00:31:33.567 Removing: /var/run/dpdk/spdk_pid63573 00:31:33.567 Removing: /var/run/dpdk/spdk_pid63607 00:31:33.567 Removing: /var/run/dpdk/spdk_pid63639 00:31:33.567 Removing: /var/run/dpdk/spdk_pid63689 00:31:33.567 Removing: /var/run/dpdk/spdk_pid63722 00:31:33.567 Removing: /var/run/dpdk/spdk_pid63819 00:31:33.567 Removing: /var/run/dpdk/spdk_pid63869 00:31:33.567 Removing: /var/run/dpdk/spdk_pid63947 00:31:33.567 Removing: /var/run/dpdk/spdk_pid64084 00:31:33.567 Removing: /var/run/dpdk/spdk_pid64169 00:31:33.567 Removing: /var/run/dpdk/spdk_pid64232 00:31:33.567 Removing: /var/run/dpdk/spdk_pid64366 00:31:33.567 Removing: /var/run/dpdk/spdk_pid64426 00:31:33.567 Removing: /var/run/dpdk/spdk_pid64476 00:31:33.567 Removing: /var/run/dpdk/spdk_pid64737 00:31:33.567 Removing: /var/run/dpdk/spdk_pid64861 00:31:33.567 Removing: /var/run/dpdk/spdk_pid64901 00:31:33.567 Removing: /var/run/dpdk/spdk_pid64937 00:31:33.567 Removing: /var/run/dpdk/spdk_pid64988 00:31:33.567 Removing: /var/run/dpdk/spdk_pid65039 00:31:33.567 Removing: /var/run/dpdk/spdk_pid65095 00:31:33.567 Removing: /var/run/dpdk/spdk_pid65140 00:31:33.567 Removing: /var/run/dpdk/spdk_pid65554 00:31:33.567 Removing: /var/run/dpdk/spdk_pid65605 00:31:33.567 Removing: /var/run/dpdk/spdk_pid65994 00:31:33.567 Removing: /var/run/dpdk/spdk_pid66473 00:31:33.567 Removing: /var/run/dpdk/spdk_pid66743 00:31:33.567 Removing: /var/run/dpdk/spdk_pid67692 00:31:33.567 Removing: /var/run/dpdk/spdk_pid68659 00:31:33.567 Removing: /var/run/dpdk/spdk_pid68794 00:31:33.567 Removing: /var/run/dpdk/spdk_pid68874 00:31:33.567 Removing: /var/run/dpdk/spdk_pid70357 00:31:33.567 Removing: /var/run/dpdk/spdk_pid70734 00:31:33.567 Removing: /var/run/dpdk/spdk_pid74255 00:31:33.567 Removing: /var/run/dpdk/spdk_pid74660 00:31:33.567 Removing: /var/run/dpdk/spdk_pid74772 00:31:33.567 Removing: /var/run/dpdk/spdk_pid74919 00:31:33.567 Removing: /var/run/dpdk/spdk_pid74954 00:31:33.827 Removing: /var/run/dpdk/spdk_pid75000 00:31:33.827 Removing: /var/run/dpdk/spdk_pid75035 00:31:33.827 Removing: /var/run/dpdk/spdk_pid75159 00:31:33.827 Removing: /var/run/dpdk/spdk_pid75301 00:31:33.827 Removing: /var/run/dpdk/spdk_pid75497 00:31:33.827 Removing: /var/run/dpdk/spdk_pid75597 00:31:33.827 Removing: /var/run/dpdk/spdk_pid75810 00:31:33.827 Removing: /var/run/dpdk/spdk_pid75917 00:31:33.827 Removing: /var/run/dpdk/spdk_pid76034 00:31:33.827 Removing: /var/run/dpdk/spdk_pid76419 00:31:33.827 Removing: /var/run/dpdk/spdk_pid76859 00:31:33.827 Removing: /var/run/dpdk/spdk_pid76860 00:31:33.827 Removing: /var/run/dpdk/spdk_pid76861 00:31:33.827 Removing: /var/run/dpdk/spdk_pid77152 00:31:33.827 Removing: /var/run/dpdk/spdk_pid77445 00:31:33.827 Removing: /var/run/dpdk/spdk_pid77455 00:31:33.827 Removing: /var/run/dpdk/spdk_pid79866 00:31:33.827 Removing: /var/run/dpdk/spdk_pid80309 00:31:33.827 Removing: /var/run/dpdk/spdk_pid80312 
00:31:33.827 Removing: /var/run/dpdk/spdk_pid80661 00:31:33.827 Removing: /var/run/dpdk/spdk_pid80676 00:31:33.827 Removing: /var/run/dpdk/spdk_pid80701 00:31:33.827 Removing: /var/run/dpdk/spdk_pid80736 00:31:33.827 Removing: /var/run/dpdk/spdk_pid80742 00:31:33.827 Removing: /var/run/dpdk/spdk_pid80828 00:31:33.827 Removing: /var/run/dpdk/spdk_pid80838 00:31:33.827 Removing: /var/run/dpdk/spdk_pid80946 00:31:33.827 Removing: /var/run/dpdk/spdk_pid80953 00:31:33.827 Removing: /var/run/dpdk/spdk_pid81058 00:31:33.827 Removing: /var/run/dpdk/spdk_pid81072 00:31:33.827 Removing: /var/run/dpdk/spdk_pid81517 00:31:33.827 Removing: /var/run/dpdk/spdk_pid81564 00:31:33.827 Removing: /var/run/dpdk/spdk_pid81661 00:31:33.827 Removing: /var/run/dpdk/spdk_pid81737 00:31:33.827 Removing: /var/run/dpdk/spdk_pid82104 00:31:33.827 Removing: /var/run/dpdk/spdk_pid82307 00:31:33.827 Removing: /var/run/dpdk/spdk_pid82768 00:31:33.827 Removing: /var/run/dpdk/spdk_pid83341 00:31:33.827 Removing: /var/run/dpdk/spdk_pid84199 00:31:33.827 Removing: /var/run/dpdk/spdk_pid84872 00:31:33.827 Removing: /var/run/dpdk/spdk_pid84879 00:31:33.827 Removing: /var/run/dpdk/spdk_pid86918 00:31:33.827 Removing: /var/run/dpdk/spdk_pid86990 00:31:33.827 Removing: /var/run/dpdk/spdk_pid87065 00:31:33.827 Removing: /var/run/dpdk/spdk_pid87132 00:31:33.827 Removing: /var/run/dpdk/spdk_pid87277 00:31:33.827 Removing: /var/run/dpdk/spdk_pid87338 00:31:33.827 Removing: /var/run/dpdk/spdk_pid87405 00:31:33.827 Removing: /var/run/dpdk/spdk_pid87473 00:31:33.827 Removing: /var/run/dpdk/spdk_pid87872 00:31:33.827 Removing: /var/run/dpdk/spdk_pid89093 00:31:33.827 Removing: /var/run/dpdk/spdk_pid89246 00:31:33.827 Removing: /var/run/dpdk/spdk_pid89490 00:31:33.827 Removing: /var/run/dpdk/spdk_pid90121 00:31:33.827 Removing: /var/run/dpdk/spdk_pid90290 00:31:33.827 Removing: /var/run/dpdk/spdk_pid90458 00:31:34.086 Removing: /var/run/dpdk/spdk_pid90559 00:31:34.086 Removing: /var/run/dpdk/spdk_pid90733 00:31:34.086 Removing: /var/run/dpdk/spdk_pid90852 00:31:34.086 Removing: /var/run/dpdk/spdk_pid91593 00:31:34.086 Removing: /var/run/dpdk/spdk_pid91635 00:31:34.086 Removing: /var/run/dpdk/spdk_pid91670 00:31:34.086 Removing: /var/run/dpdk/spdk_pid92035 00:31:34.086 Removing: /var/run/dpdk/spdk_pid92067 00:31:34.086 Removing: /var/run/dpdk/spdk_pid92109 00:31:34.086 Removing: /var/run/dpdk/spdk_pid92580 00:31:34.086 Removing: /var/run/dpdk/spdk_pid92603 00:31:34.086 Removing: /var/run/dpdk/spdk_pid92846 00:31:34.086 Removing: /var/run/dpdk/spdk_pid93009 00:31:34.086 Removing: /var/run/dpdk/spdk_pid93027 00:31:34.086 Clean 00:31:34.086 14:36:01 -- common/autotest_common.sh@1451 -- # return 0 00:31:34.086 14:36:01 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:31:34.086 14:36:01 -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:34.086 14:36:01 -- common/autotest_common.sh@10 -- # set +x 00:31:34.086 14:36:01 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:31:34.086 14:36:01 -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:34.086 14:36:01 -- common/autotest_common.sh@10 -- # set +x 00:31:34.086 14:36:01 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:31:34.345 14:36:01 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:31:34.345 14:36:01 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:31:34.345 14:36:01 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:31:34.345 14:36:01 -- spdk/autotest.sh@394 
-- # hostname 00:31:34.345 14:36:01 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:31:34.345 geninfo: WARNING: invalid characters removed from testname! 00:32:00.919 14:36:27 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:03.450 14:36:30 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:05.352 14:36:32 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:07.883 14:36:34 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:09.785 14:36:37 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:12.317 14:36:39 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:14.243 14:36:41 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:32:14.243 14:36:41 -- spdk/autorun.sh@1 -- $ timing_finish 00:32:14.243 14:36:41 -- common/autotest_common.sh@736 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:32:14.243 14:36:41 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:32:14.243 14:36:41 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:32:14.243 14:36:41 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: 
--countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:32:14.243 + [[ -n 5209 ]] 00:32:14.243 + sudo kill 5209 00:32:14.252 [Pipeline] } 00:32:14.267 [Pipeline] // timeout 00:32:14.273 [Pipeline] } 00:32:14.287 [Pipeline] // stage 00:32:14.292 [Pipeline] } 00:32:14.307 [Pipeline] // catchError 00:32:14.316 [Pipeline] stage 00:32:14.318 [Pipeline] { (Stop VM) 00:32:14.330 [Pipeline] sh 00:32:14.610 + vagrant halt 00:32:17.143 ==> default: Halting domain... 00:32:23.720 [Pipeline] sh 00:32:24.002 + vagrant destroy -f 00:32:26.568 ==> default: Removing domain... 00:32:26.840 [Pipeline] sh 00:32:27.123 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:32:27.133 [Pipeline] } 00:32:27.147 [Pipeline] // stage 00:32:27.153 [Pipeline] } 00:32:27.167 [Pipeline] // dir 00:32:27.172 [Pipeline] } 00:32:27.186 [Pipeline] // wrap 00:32:27.193 [Pipeline] } 00:32:27.202 [Pipeline] // catchError 00:32:27.210 [Pipeline] stage 00:32:27.212 [Pipeline] { (Epilogue) 00:32:27.221 [Pipeline] sh 00:32:27.502 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:32:32.813 [Pipeline] catchError 00:32:32.815 [Pipeline] { 00:32:32.839 [Pipeline] sh 00:32:33.122 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:32:33.380 Artifacts sizes are good 00:32:33.389 [Pipeline] } 00:32:33.403 [Pipeline] // catchError 00:32:33.412 [Pipeline] archiveArtifacts 00:32:33.418 Archiving artifacts 00:32:33.558 [Pipeline] cleanWs 00:32:33.571 [WS-CLEANUP] Deleting project workspace... 00:32:33.571 [WS-CLEANUP] Deferred wipeout is used... 00:32:33.577 [WS-CLEANUP] done 00:32:33.579 [Pipeline] } 00:32:33.592 [Pipeline] // stage 00:32:33.598 [Pipeline] } 00:32:33.611 [Pipeline] // node 00:32:33.616 [Pipeline] End of Pipeline 00:32:33.649 Finished: SUCCESS